What is the best routine for graphics compression?

The place for codemasters or beginners to talk about programming any language for the Spectrum.
Joefish
Rick Dangerous
Posts: 2058
Joined: Tue Nov 14, 2017 10:26 am

Re: What is the best routine for graphics compression?

Post by Joefish »

No, because RCS actually turns each 2K chunk of the screen into 32 one-byte-wide vertical columns, as well as re-ordering each column to be in continuous visual order. It's an extremely efficient transformation to apply before any LZ-type compression: it still allows horizontal back-references within a 2K rolling window, but makes the more common vertical back-references very small (apart from the jump at each screen third). I thought it was line-by-line, but a quick glance at his source code shows it uses columns.
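In sketch form, the reordering works out to something like this (Python, assuming a standard 6144-byte bitmap as input; the names are illustrative and the real RCS tool may differ in detail):

Code:

def rcs_transform(scr):
    # Reorder a 6144-byte ZX Spectrum bitmap so that each 2K third
    # becomes 32 one-byte-wide columns of 64 bytes, each column in
    # top-to-bottom visual order.
    out = bytearray()
    for third in range(3):
        base = third * 2048
        for col in range(32):
            for row in range(64):  # visual pixel row within the third
                char_row, line = divmod(row, 8)
                # native layout: the pixel line picks the 256-byte page,
                # the character row picks the 32-byte row inside it
                out.append(scr[base + line * 256 + char_row * 32 + col])
    return bytes(out)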

The main thing going for MiniDict3, though, is that the compression algorithm runs within a few seconds on the Spectrum itself. It's not the most tightly packed data, but simple chunky images do very well with it. And I'm also not insisting on daft licensing controls.
Lethargeek
Manic Miner
Posts: 743
Joined: Wed Dec 11, 2019 6:47 am

Re: What is the best routine for graphics compression?

Post by Lethargeek »

Joefish wrote: Thu Dec 19, 2019 3:57 pm No, because RCS actually turns the screen into 32x one-byte-wide vertical columns for each 2K chunk of screen, as well as re-ordering each column to be in continuous visual order.
this in no way disproves what i'm saying
Joefish wrote: Thu Dec 19, 2019 3:57 pm It's an extremely efficient transformation before doing any LZ type compression,
...and i'm saying that "LZ type" (or any byte-based) compression is inherently inefficient for zx graphics, regardless of transformations

the compression results speak for themselves, why don't you check for yourself?

my zx-pk links from the old thread are dead now but the tool is also available on vtrd: https://vtrd.in/pcutilz/!LGKV1.1.zip

there's also the attribute optimizer: https://vtrd.in/pcutilz/OPALV1_0.zip
catmeows
Manic Miner
Posts: 718
Joined: Tue May 28, 2019 12:02 pm
Location: Prague

Re: What is the best routine for graphics compression?

Post by catmeows »

Lethargeek wrote: Thu Dec 19, 2019 4:38 pm
my zx-pk links from the old thread are dead now but the tool is also available on vtrd: https://vtrd.in/pcutilz/!LGKV1.1.zip
Prase, could you share details on how it works?
Proud owner of Didaktik M
Lethargeek
Manic Miner
Posts: 743
Joined: Wed Dec 11, 2019 6:47 am

Re: What is the best routine for graphics compression?

Post by Lethargeek »

catmeows wrote: Mon Dec 23, 2019 12:53 pm could you share details on how it works?
copypasta from the old wos thread:
No screen reordering; it goes through standard Spectrum 8x8 tiles one by one, rather than through bytes or vertical byte stripes. But first it applies one of several useful xor-filters separately to each tile (not like joefish's predictor from the RCS thread, but simpler: a one-directional 1-2 pixel shift followed by an xor, or an xor with any already-processed tile). You can see the result in the file LgK-xored.bmp after running LgK-pack.

These filters not only increase the number of zero bits for normal pics, but also distribute the remaining 1s in such a fashion that 2x2 square pixel chunks can be packed well with a certain prefix code (the same for almost any sensible image, with the result very close to optimal Huffman code packing in most cases, the difference usually less than the size of the Huffman tree). Again, you can see the statistics for the different types of 2x2 chunks by running LgK-pack. The filter choice for each tile is Huffman-packed, however, as this data is unpredictable for each screen. No RLE is used for tile data (for now, and maybe never).
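As an illustration of the sort of one-directional filter meant here (a hedged Python sketch; the actual LgK filter set is not spelled out in the post):

Code:

def xor_shift_filter(tile, shift=1):
    # tile: 8 bytes, one per pixel row of an 8x8 cell.
    # Xor each row with itself shifted right by 1 (or 2) pixels:
    # runs of identical pixels collapse to their boundaries, raising
    # the count of zero bits before the 2x2 chunk coding stage.
    return bytes(row ^ (row >> shift) for row in tile)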

Attributes are compressed differently: basically there's a choice of repeating one of the two adjacent former attrs (in rare cases yet another method is used, with two more choices that skip the adjacent ones to the next colour) or just loading the new attr value from the bitstream (or from a small list of the most frequent values). Repeating the former action (not the former attr!) is coded with one bit; changing action costs 2-3 bits depending on the method. Longer sequences of repeats can then be reduced with RLE (there are many complex pics that are light on attributes, so here it's useful).
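The decision structure might be sketched like this (Python; heavily hedged: the exact bit layout and action encoding are assumptions, and the rare third method and the RLE pass are left out):

Code:

def decode_attrs(take, width=32, height=24):
    # take(n) must read n bits from the compressed stream and return
    # them as an integer; attrs is filled in row-major order.
    attrs, prev = [], 'left'
    for i in range(width * height):
        if i and take(1):                   # 1: repeat the former action
            action = prev
        else:                               # 0 + 1 bit: pick another action
            others = [a for a in ('left', 'up', 'new') if a != prev]
            action = others[take(1)]
        if action == 'left' and i % width:
            attrs.append(attrs[-1])         # repeat the attr to the left
        elif action == 'up' and i >= width:
            attrs.append(attrs[i - width])  # repeat the attr above
        else:
            attrs.append(take(8))           # load a literal attr value
        prev = action
    return attrs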

Attribute data can be placed (and depacked) either after each tile or after the whole picture separately. Give a "negative" address (same number, just add a minus) to LgK-sfx to have the attributes intermixed with the tile data (as in the demo). The generated decompressor will be 2-3 bytes longer in this case.
Basically, xoring maximizes the number of 2x2 pixel chunks (and indirectly the number of bigger square chunks) with no ink pixels inside, or just one. Then all the 2x2 chunks are compressed using the same variable-size code for any screen:

Code:

inks	code

none	0
1	10xx
2	110xx
2	1110x
4	11110
3	11111xx
On top of that, empty 4x4 and 8x8 chunks might be coded with an extra leading zero bit (this is decided for the whole screen).
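For reference, the table expands to a complete prefix code over all 16 possible 2x2 patterns; in Python (how the six 2-ink patterns split between the 110xx and 1110x groups is a guess, as the post doesn't say):

Code:

def build_codebook():
    # Group the 16 possible 2x2 chunks (4 bits each) by ink count.
    by_count = {n: [] for n in range(5)}
    for chunk in range(16):
        by_count[bin(chunk).count('1')].append(chunk)
    code = {by_count[0][0]: '0'}                 # empty chunk
    for i, c in enumerate(by_count[1]):          # one ink: 10xx
        code[c] = '10' + format(i, '02b')
    for i, c in enumerate(by_count[2][:4]):      # two inks: 110xx
        code[c] = '110' + format(i, '02b')
    for i, c in enumerate(by_count[2][4:]):      # two inks: 1110x
        code[c] = '1110' + format(i, 'b')
    code[by_count[4][0]] = '11110'               # all four inks
    for i, c in enumerate(by_count[3]):          # three inks: 11111xx
        code[c] = '11111' + format(i, '02b')
    return code                                  # Kraft sum is exactly 1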
catmeows
Manic Miner
Posts: 718
Joined: Tue May 28, 2019 12:02 pm
Location: Prague

Re: What is the best routine for graphics compression?

Post by catmeows »

I have tried Phantis (MAC):

zx7(ordered by columns) -> 6546
exomizer(ordered by columns) -> 6305
lgk -> 5713

and black-flag-deck:

zx7(ordered by columns) -> 1351
exomizer(ordered by columns) -> 1251
lgk -> 1280
Proud owner of Didaktik M
Lethargeek
Manic Miner
Posts: 743
Joined: Wed Dec 11, 2019 6:47 am

Re: What is the best routine for graphics compression?

Post by Lethargeek »

catmeows wrote: Tue Dec 24, 2019 10:26 am black-flag-deck:
I couldn't find this by name, but i suspect it's half empty or overly regular. As i planned to make a compressor for an arbitrary rectangular area, and the decompressor was big enough already, i never optimized for "lighter" screens (and RLE is used only for attributes). On a big selection of screens, LgK is usually much better (by hundreds of bytes) for about 90% of them, and very close (maybe a bit worse) for the rest.

There are also a few pathological cases (see R-Tape's avatar here :D or a low-res imitation like Alien Syndrome) with much worse ratios compared to "byte compressors"; the result of a very limited number of unique bytes over most of the screen area and little (if any) correlation between adjacent pixels.

Also, pics with less regular dithering (like Floyd-Steinberg) may go either way, but usually the chunky method is good enough.
catmeows
Manic Miner
Posts: 718
Joined: Tue May 28, 2019 12:02 pm
Location: Prague

Re: What is the best routine for graphics compression?

Post by catmeows »

Lethargeek wrote: Tue Dec 24, 2019 3:14 pm
catmeows wrote: Tue Dec 24, 2019 10:26 am black-flag-deck:
I couldn't find this by name, but i suspect it's half empty or overly regular.
Indeed, it is.
As i planned to make a compressor for an arbitrary rectangular area
Ah, it is a pity you didn't. The benchmarks for full screens are interesting, but I think smaller pictures are more relevant for in-game graphics. And LZ variants have a really hard time with small screens, especially when they have fixed offset sizes.
Also, pics with less regular dithering (like Floyd-Steinberg) may go either way, but usually the chunky method is good enough.
I understand, but random dither is hard to pin down in general, so I don't see it as a problem.
To be honest I'm quite surprised how well LGK works.

One thing I would really like to try one day is a proper context -> prediction -> range coder packer.
I mean using the pixels to the west, north-west and north to form a 3-bit context, plus some kind of range coder. It would inevitably be slow (I did some estimates, and with 8-bit probabilities multiplying 16-bit limits I got a little less than 1000 T-states per pixel -> about 400-500 bytes/sec), but it would really catch the cases where you have a small illustration + text on screen.
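The modelling side of that could look like this (a hedged Python sketch; the context bit order and the shift-based adaptation rule are assumptions, and the range-coder arithmetic itself is omitted):

Code:

def model(pixels, w, h):
    # pixels: h rows of w ints (0 or 1). Yields (bit, probability)
    # pairs to feed into a binary range coder.
    prob = [128] * 8                  # 8-bit P(ink) for each 3-bit context
    for y in range(h):
        for x in range(w):
            west  = pixels[y][x - 1] if x else 0
            nw    = pixels[y - 1][x - 1] if x and y else 0
            north = pixels[y - 1][x] if y else 0
            ctx = (west << 2) | (nw << 1) | north
            bit = pixels[y][x]
            yield bit, prob[ctx]
            if bit:                   # simple shift-based adaptation
                prob[ctx] += (256 - prob[ctx]) >> 4
            else:
                prob[ctx] -= prob[ctx] >> 4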
Proud owner of Didaktik M
Lethargeek
Manic Miner
Posts: 743
Joined: Wed Dec 11, 2019 6:47 am

Re: What is the best routine for graphics compression?

Post by Lethargeek »

catmeows wrote: Mon Jan 06, 2020 5:15 pm Ah, it is a pity you didn't. The benchmarks for full screens are interesting, but I think smaller pictures are more relevant for in-game graphics. And LZ variants have a really hard time with small screens, especially when they have fixed offset sizes.
maybe i will address this in future versions, but right now i have too little free time when my brains are in working condition :(
catmeows wrote: Mon Jan 06, 2020 5:15 pm I understand, but random dither is hard to pin down in general, so I don't see it as a problem.
fun fact: once i found one particularly badly dithered specimen, and later it turned out the source was a greyscale picture with 8x1 wide pixels :lol:
and another: serious pc arithmetic coding software did even worse on the EFMB title screen (an example of "abnormal correlation") :twisted:
catmeows wrote: Mon Jan 06, 2020 5:15 pm To be honest I'm quite surprised how well LGK works.
and it's still very much suboptimal! :)
Lethargeek
Manic Miner
Posts: 743
Joined: Wed Dec 11, 2019 6:47 am

Re: What is the best routine for graphics compression?

Post by Lethargeek »

catmeows wrote: Mon Jan 06, 2020 5:15 pm One thing I would really like to try one day is a proper context -> prediction -> range coder packer.
I mean using the pixels to the west, north-west and north to form a 3-bit context, plus some kind of range coder. It would inevitably be slow (I did some estimates, and with 8-bit probabilities multiplying 16-bit limits I got a little less than 1000 T-states per pixel -> about 400-500 bytes/sec), but it would really catch the cases where you have a small illustration + text on screen.
do you have any estimates about the possible zx decompressor size?
catmeows
Manic Miner
Posts: 718
Joined: Tue May 28, 2019 12:02 pm
Location: Prague

Re: What is the best routine for graphics compression?

Post by catmeows »

Lethargeek wrote: Tue Jan 07, 2020 6:03 pm
catmeows wrote: Mon Jan 06, 2020 5:15 pm One thing I would really like to try one day is a proper context -> prediction -> range coder packer.
I mean using the pixels to the west, north-west and north to form a 3-bit context, plus some kind of range coder. It would inevitably be slow (I did some estimates, and with 8-bit probabilities multiplying 16-bit limits I got a little less than 1000 T-states per pixel -> about 400-500 bytes/sec), but it would really catch the cases where you have a small illustration + text on screen.
do you have any estimates about the possible zx decompressor size?
Not much, I guess. 200-250 bytes, perhaps.
Proud owner of Didaktik M
Nienn Heskil
Microbot
Posts: 134
Joined: Tue Jun 09, 2020 6:14 am

Re: What is the best routine for graphics compression?

Post by Nienn Heskil »

catmeows wrote: Wed Oct 02, 2019 7:15 am
Bedazzle wrote: Wed Oct 02, 2019 4:58 am What about megalz packer?
https://github.com/emmanuel-marty/lzsa/ ... /README.md

Here is quite a handy chart showing the compression/speed ratio.
Necropost: what's 'Pletter5d'? Is there a newer version after 0.5c1?
catmeows
Manic Miner
Posts: 718
Joined: Tue May 28, 2019 12:02 pm
Location: Prague

Re: What is the best routine for graphics compression?

Post by catmeows »

Nienn Heskil wrote: Wed Aug 26, 2020 5:53 pm
catmeows wrote: Wed Oct 02, 2019 7:15 am https://github.com/emmanuel-marty/lzsa/ ... /README.md

Here is quite a handy chart showing the compression/speed ratio.
Necropost: what's 'Pletter5d'? Is there a newer version after 0.5c1?
Good question. I don't know.
Proud owner of Didaktik M
introspec
Dizzy
Posts: 53
Joined: Wed Jan 02, 2019 2:26 pm
Location: London

Re: What is the best routine for graphics compression?

Post by introspec »

Nienn Heskil wrote: Wed Aug 26, 2020 5:53 pm Necropost: what's 'Pletter5d'? Is there a newer version after 0.5c1?
It is not an official version. When I was writing my own decompressor for Pletter, I realized that I could slightly change the format and allow for a longer match buffer (up to 16K, instead of the up to 8K in Pletter 0.5c1). It worked fairly well, so I labeled it "Pletter 5d" and left it on the diagram. I am not really planning to release it, because it seems to be a bit of a dead-end technology: LZSA2 works better for big(ger) data files with a lot of long matches, and MegaLZ works better for data with a lot of small matches (e.g. graphics files). I do not think Pletter is particularly competitive.