An enhancement of J-Bit encoding algorithm applied in file compressor system / Bautista, Jemimah B. and Canada, Reineir S.
By: Bautista, Jemimah B. and Canada, Reineir S.
Series: March 2019. Description: 62 pp.; 28 cm. Content type: text. Media type: unmediated. Carrier type: volume.

| Item type | Current location | Home library | Collection | Call number | Status | Date due | Barcode | Item holds |
|---|---|---|---|---|---|---|---|---|
| Book | PLM | PLM Archives | Filipiniana-Thesis | QA76.9.B38.2019 (Browse shelf) | Available | | FT7080 | |
Thesis: (BSCS major in Computer Science) - Pamantasan ng Lungsod ng Maynila, 2019.
ABSTRACT: Data compression is the process of encoding, modifying, or converting the bit structure of data so that it consumes less disk space. One of the newer data compression algorithms is the J-Bit Encoding algorithm, invented by Agus Dwia Suarjaya in 2013. The algorithm works by manipulating the bits inside a file: it inspects every bit of the file and analyzes it for compression. Testing this algorithm, we found three problems. First, it produces a much larger file after compression when the input is a mix of images and text, or of audio and video. Second, it does not read the contents of the compressed file, so the file cannot be decompressed when the local copies of Data I and Data II that the algorithm produces are absent. Lastly, although J-Bit Encoding is a lossless compression algorithm, it fails the most important requirement of one, which is not to lose any bits of data. These problems are solved by, first, adding two algorithms before and after the existing one, namely the Move-to-Front Transform and the Lempel-Ziv-Welch algorithm, respectively; second, using a ZIP file to consolidate all the files needed for decompression; and lastly, using the Apache Commons IO API to reconstruct the file based only on the input data stream and on the file extension extracted during compression. We recommend that this enhanced algorithm be further improved by optimizing the processes involved, using another transform algorithm to handle a wider range of symbols, and supporting compression of multiple files.
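The enhancement described above prepends a Move-to-Front Transform to J-Bit Encoding. A minimal Python sketch of that transform (illustrative only, not the authors' implementation; function names are assumptions):

```python
# Move-to-Front transform: replaces each byte with its index in a recency
# list, so recently seen bytes map to small numbers, which later stages
# (such as J-Bit Encoding or LZW) can compress more effectively.

def mtf_encode(data: bytes) -> list[int]:
    """Emit each byte's index in the recency list, then move it to the front."""
    alphabet = list(range(256))
    out = []
    for b in data:
        i = alphabet.index(b)
        out.append(i)
        alphabet.pop(i)
        alphabet.insert(0, b)
    return out

def mtf_decode(indices: list[int]) -> bytes:
    """Invert the transform: look up each index, then move that byte to front."""
    alphabet = list(range(256))
    out = bytearray()
    for i in indices:
        b = alphabet.pop(i)
        out.append(b)
        alphabet.insert(0, b)
    return bytes(out)
```

The transform is exactly invertible, so it adds no loss of its own to the pipeline; it only reshapes the byte distribution before the compression stages run.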
