What is the best routine for graphics compression?
Re: What is the best routine for graphics compression?
No, because RCS actually turns the screen into 32 one-byte-wide vertical columns for each 2K chunk of screen, as well as re-ordering each column to be in continuous visual order. It's an extremely efficient transformation to apply before any LZ-type compression, as it still allows horizontal back-references within a 2K rolling window, but makes the more common vertical back-references very small (apart from the jump at every screen third). I thought it was line-by-line, but a quick glance at his source code shows it uses columns.
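For anyone unfamiliar with the trick, here is a rough Python sketch of the reordering as described above. It is my interpretation of the description, not joefish's actual RCS code, and it uses the standard interleaved Spectrum bitmap addressing:

```python
def rcs_reorder(bitmap):
    """Reorder a 6144-byte ZX Spectrum bitmap into column order.

    For each 2K screen third, emit 32 one-byte-wide columns, each in
    top-to-bottom visual order, so vertically adjacent screen bytes
    become adjacent in the output stream.
    """
    assert len(bitmap) == 6144
    out = bytearray()
    for third in range(3):            # three 2K screen thirds
        for col in range(32):         # 32 byte-wide columns per third
            for row in range(64):     # 64 pixel rows per third, visual order
                y = third * 64 + row
                # standard interleaved Spectrum address within the bitmap:
                # bits of y are shuffled as y7 y6 | y2 y1 y0 | y5 y4 y3 | x
                offset = ((y & 0xC0) << 5) | ((y & 0x07) << 8) \
                       | ((y & 0x38) << 2) | col
                out.append(bitmap[offset])
    return bytes(out)
```

After this pass, vertical back-references in an LZ coder become offset-1 matches within a column, while horizontal ones are 64 bytes apart and still fit a 2K window.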
The main thing going for MiniDict3, though, is that the compression algorithm runs within a few seconds on the Spectrum itself. It's not the most tightly packed data, but simple chunky images do very well with it. And I'm also not insisting on daft licensing controls.
- Lethargeek
- Manic Miner
- Posts: 743
- Joined: Wed Dec 11, 2019 6:47 am
Re: What is the best routine for graphics compression?
in no way does this disprove what i'm saying
...and i'm saying that "LZ type" (or any byte-based) compression is inherently inefficient for zx graphics, regardless of transformations
the compression results speak for themselves, why don't you check yourself?
my zx-pk links from the old thread are dead now but the tool is also available on vtrd: https://vtrd.in/pcutilz/!LGKV1.1.zip
there's the attribute optimizer as well: https://vtrd.in/pcutilz/OPALV1_0.zip
Re: What is the best routine for graphics compression?
Please, could you share details of how it works?
Lethargeek wrote: ↑Thu Dec 19, 2019 4:38 pm
my zx-pk links from the old thread are dead now but the tool is also available on vtrd: https://vtrd.in/pcutilz/!LGKV1.1.zip
Proud owner of Didaktik M
- Lethargeek
- Manic Miner
- Posts: 743
- Joined: Wed Dec 11, 2019 6:47 am
Re: What is the best routine for graphics compression?
copypasta from the old wos thread:
Basically, xoring maximizes the number of 2x2-pixel chunks (and indirectly the number of bigger square chunks) with no ink pixels, or just one, inside. Then all the 2x2 chunks are compressed using the same variable-size code for any screen:

There is no screen reordering; it goes through the standard Spectrum 8x8 tiles one by one, rather than through bytes or vertical byte stripes. But first it applies one of several useful xor-filters (not like joefish's predictor from the RCS thread, but a simpler one: a one-directional 1-2 pixel shift then xor, or xor with any already-processed tile) separately to each tile. You can see the result in the file LgK-xored.bmp after running LgK-pack. These filters not only increase the number of zero bits for normal pics, but also distribute the remaining 1s in such a fashion that the 2x2 square pixel chunks pack well with a certain prefix code (the same code for almost any sensible image, with a result very close to optimal Huffman packing in most cases; the difference is usually less than the size of the Huffman tree). Again, you can see the statistics for the different types of 2x2 chunks by running LgK-pack. The filter choice for each tile is Huffman-packed, however, as that data is unpredictable for each screen. No RLE is used for tile data (right now, but maybe never).
Attributes are compressed differently. Basically there's a choice of repeating one of the two adjacent former attrs (in rare cases yet another method is used, with two more choices skipping the adjacent ones to the next colour), or just loading the new attr value from the bitstream (or from a small list of the most frequent values). Repeating the former action (not the former attr!) is coded with one bit; changing action costs 2-3 bits depending on the method. Then longer sequences of repeats can be reduced with RLE (there are many complex pics light on attributes, so here it's useful).
Attribute data can be placed (and depacked) either after each tile or separately after the whole picture. Give a "negative" address (the same number, just with a minus) to LgK-sfx to have the attributes intermixed with the tile data (as in the demo). The generated decompressor will be 2-3 bytes longer in this case.
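As a toy illustration of the attribute choice described above. Interpreting the "two adjacent former attrs" as the attribute to the left and the attribute above is my assumption, and the real 1-bit/2-3-bit action coding and the RLE layer are not reproduced, only the decision logic:

```python
def decode_attrs(decisions, width=32):
    """Toy decoder: each decision either repeats the left attr, repeats
    the attr one row above, or loads a literal value, mirroring the
    choice described in the post (bitstream layout not reproduced)."""
    attrs = []
    for d in decisions:
        if d[0] == "left":
            attrs.append(attrs[-1])          # repeat previous attr
        elif d[0] == "above":
            attrs.append(attrs[-width])      # repeat attr one row up
        else:                                # ("lit", value)
            attrs.append(d[1])
    return attrs
```

The point of coding "repeat the former action" rather than "repeat the former attr" is that long runs of identical decisions (e.g. a whole row copying the row above) collapse to a stream of single bits, which the RLE pass then shortens further.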
Code: Select all
inks  code
none  0
1     10xx
2     110xx
2     1110x
4     11110
3     11111xx
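To make the table concrete: each 2x2 chunk costs the prefix plus its index bits (the `xx`/`x` that pick the exact pattern), and the codeword lengths form a complete prefix code (the Kraft sums add up to exactly 1). A small sketch of the implied bit costs; how the six possible 2-ink patterns split between the 110xx and 1110x groups is my assumption, the real tool's assignment may differ:

```python
# Bit costs implied by the table above: prefix bits plus index bits.
CODE_BITS = {
    "0 inks":         1,  # 0
    "1 ink":          4,  # 10xx    (4 positions)
    "2 inks, common": 5,  # 110xx   (4 of the 6 patterns, assumed)
    "2 inks, rare":   5,  # 1110x   (the other 2 patterns, assumed)
    "4 inks":         5,  # 11110   (single pattern)
    "3 inks":         7,  # 11111xx (4 positions)
}

def packed_bits(chunk_histogram):
    """Total bit count for a histogram of 2x2 chunk classes."""
    return sum(CODE_BITS[kind] * count
               for kind, count in chunk_histogram.items())
```

So a filtered screen dominated by empty and 1-ink chunks lands well below the raw 4 bits per chunk, which is where the xor-filters pay off.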
Re: What is the best routine for graphics compression?
I have tried Phantis (MAC):
zx7(ordered by columns) -> 6546
exomizer(ordered by columns) -> 6305
lgk -> 5713
and black-flag-deck:
zx7(ordered by columns) -> 1351
exomizer(ordered by columns) -> 1251
lgk -> 1280
Proud owner of Didaktik M
- Lethargeek
- Manic Miner
- Posts: 743
- Joined: Wed Dec 11, 2019 6:47 am
Re: What is the best routine for graphics compression?
I couldn't find this by name, but i suspect it's half empty or overly regular. As i planned to make a compressor for an arbitrary rectangular area, and the decompressor was big enough already, i never optimized for "lighter" screens (and RLE is only for attributes). On a big selection of screens LgK is usually much better (by hundreds of bytes) for about 90% of them, and very close (maybe a bit worse) for the rest.
There are also a few pathological cases (see R-Tape's avatar here, or a low-res imitation like Alien Syndrome) with much worse ratios compared to "byte compressors"; this is the result of a very limited number of unique bytes covering most of the screen area, with less (if any) correlation between adjacent pixels.
Also, pics with less regular dithering (like Floyd-Steinberg) may go either way, but usually the chunky method is good enough.
Re: What is the best routine for graphics compression?
Indeed, it is.
Lethargeek wrote: ↑Tue Dec 24, 2019 3:14 pm
I couldn't find this by name, but i suspect it's half empty or overly regular.
Ah, it's a pity you didn't. The benchmarks for full screens are interesting, but I think that for in-game graphics smaller pictures are more relevant. And LZ variants have a really hard time with small screens, especially when they have fixed offset sizes.
As i planned to make a compressor for an arbitrary rectangular area
I understand, but random dither is hard to pin down in general, so I don't see it as a problem.
Also pics with less regular dithering (like Floyd-Steinberg) may go either way, but usually the chunky method is good enough.
To be honest I'm quite surprised how well LGK works.
One thing I would really like to try one day is a proper context -> prediction -> range coder packer.
I mean using the pixels to the west, north-west and north to form a 3-bit context, and a kind of range coder. It would inevitably be slow (I did some estimations, and with 8-bit probabilities multiplying 16-bit limits I got a little less than 1000 T-states per pixel, i.e. about 400-500 bytes/sec), but it would really catch the cases where you have a small illustration plus text on the screen.
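As a sanity check of this idea, one can estimate the achievable size without writing the range coder at all: keep adaptive 0/1 counts per 3-bit (W, NW, N) context and sum the ideal code lengths -log2(p), which is the size an arithmetic/range coder approaches. A sketch under my own assumptions (Laplace-smoothed counts, out-of-bounds neighbours treated as 0):

```python
import math

def estimate_bits(pixels, width):
    """Estimate the compressed size (in bits) of a 1-bpp image:
    the W, NW and N neighbours form a 3-bit context, each context
    keeps adaptive 0/1 counts, and we sum -log2(p) per pixel."""
    counts = [[1, 1] for _ in range(8)]  # per-context [zeros+1, ones+1]
    bits = 0.0
    for i, px in enumerate(pixels):
        x, y = i % width, i // width
        w  = pixels[i - 1]         if x > 0 else 0
        n  = pixels[i - width]     if y > 0 else 0
        nw = pixels[i - width - 1] if x > 0 and y > 0 else 0
        ctx = (w << 2) | (nw << 1) | n
        c0, c1 = counts[ctx]
        p = (c1 if px else c0) / (c0 + c1)
        bits -= math.log2(p)
        counts[ctx][px] += 1   # adapt the model as a decoder would
    return bits
```

On a mostly blank area with a small illustration plus text, each context converges quickly, which is exactly the case catmeows describes the scheme catching.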
Proud owner of Didaktik M
- Lethargeek
- Manic Miner
- Posts: 743
- Joined: Wed Dec 11, 2019 6:47 am
Re: What is the best routine for graphics compression?
maybe i will address this in future versions, but right now i have too little free time when my brains are in working condition
fun fact: once i found one particularly bad dithered specimen and later it turned out the source was a greyscale picture with 8x1 wide pixels
and another: serious pc arithmetic coding software failed even worse at the EFMB title screen (an example of "abnormal correlation")
and it's still very much suboptimal!
- Lethargeek
- Manic Miner
- Posts: 743
- Joined: Wed Dec 11, 2019 6:47 am
Re: What is the best routine for graphics compression?
do you have any estimates about the possible zx decompressor size?
catmeows wrote: ↑Mon Jan 06, 2020 5:15 pm
One thing I would really like to try one day is proper context -> prediction -> range coder packer.
I mean using the pixels to the west, north-west and north to form a 3-bit context, and a kind of range coder. It would inevitably be slow (I did some estimations, and with 8-bit probabilities multiplying 16-bit limits I got a little less than 1000 T-states per pixel, i.e. about 400-500 bytes/sec), but it would really catch the cases where you have a small illustration plus text on the screen.
Re: What is the best routine for graphics compression?
Not much, I guess. 200-250 bytes perhaps.
Lethargeek wrote: ↑Tue Jan 07, 2020 6:03 pm
do you have any estimates about the possible zx decompressor size?
catmeows wrote: ↑Mon Jan 06, 2020 5:15 pm
One thing I would really like to try one day is proper context -> prediction -> range coder packer.
I mean using the pixels to the west, north-west and north to form a 3-bit context, and a kind of range coder. It would inevitably be slow (I did some estimations, and with 8-bit probabilities multiplying 16-bit limits I got a little less than 1000 T-states per pixel, i.e. about 400-500 bytes/sec), but it would really catch the cases where you have a small illustration plus text on the screen.
Proud owner of Didaktik M
-
- Microbot
- Posts: 134
- Joined: Tue Jun 09, 2020 6:14 am
- Contact:
Re: What is the best routine for graphics compression?
Good question. I don't know.
Nienn Heskil wrote: ↑Wed Aug 26, 2020 5:53 pm
Necropost: what's 'Pletter5d'? Is there a newer version after 0.5c1?
catmeows wrote: ↑Wed Oct 02, 2019 7:15 am https://github.com/emmanuel-marty/lzsa/ ... /README.md
Here is a quite handy chart showing the compression/speed ratio.
Proud owner of Didaktik M
Re: What is the best routine for graphics compression?
It is not an official version. When I was writing my own decompressor for Pletter, I realized that I could slightly change the format and allow for a longer match buffer (up to 16K instead of the up to 8K in Pletter 0.5c1). It worked fairly well, so I labeled it "Pletter 5d" and left it on the diagram. I am not really planning to release it, because it seems to be a bit of a dead-end technology. LZSA2 would work better for big(ger) data files with a lot of long matches, and MegaLZ works better for data with a lot of small matches (e.g. graphics files). I do not think Pletter is particularly competitive.
Nienn Heskil wrote: ↑Wed Aug 26, 2020 5:53 pm
Necropost: what's 'Pletter5d'? Is there a newer version after 0.5c1?