What is the best routine for graphics compression?

The place for codemasters or beginners to talk about programming any language for the Spectrum.
Joefish
Rick Dangerous
Posts: 2059
Joined: Tue Nov 14, 2017 10:26 am

Re: What is the best routine for graphics compression?

Post by Joefish »

Generic byte packers aren't always the best at compressing graphics, though. If you can take advantage of some knowledge of how graphics data is more likely to repeat, then you can optimise the compressor to pack that data. Einar's RCS rearranges the staggered Spectrum screen into line-by-line order to bring potential repetitions closer together, which makes them easier for an LZ77-type algorithm to compress. My algorithm pre-processes the graphics to turn common repetitions into zeroes, which my packing routine can optimise above all other values, a bit like an RLE algorithm would.
Joefish
Rick Dangerous
Posts: 2059
Joined: Tue Nov 14, 2017 10:26 am

Re: What is the best routine for graphics compression?

Post by Joefish »

Hmm. Just been meddling with a screen compressor that gives some really good results on the selection of screens I was using. It gets Einar's Cobra pic down to just 1945 bytes. But try it on that pack of Russian screens and it's not so good. It 'compresses' MAC's Phantis screen 'down' to only 7123 bytes! :lol: (that's more than it was originally, in case you didn't notice)

But on a couple of examples like that BatmanTheCapedCrusader I get 4957 bytes, compared to 5457 (I think) for RCS + ZX7.
It works best on patches of colour and lines - multiple levels of stipple and colour changes just make it worse. It might be good for something like that Hobbit update.

I should probably tweak the algorithm so it just stops trying when faced with too much changing data... :geek:
catmeows
Manic Miner
Posts: 718
Joined: Tue May 28, 2019 12:02 pm
Location: Prague

Re: What is the best routine for graphics compression?

Post by catmeows »

How does it work?
Honestly, a decent benchmark suite of screens would really help. And I don't mean just full-size screens, but also in-game illustrations, for example from Case of missing swan, and even smaller examples, something around 6x6 characters like the pictures from Rychle Sipy.
Proud owner of Didaktik M
Joefish
Rick Dangerous
Posts: 2059
Joined: Tue Nov 14, 2017 10:26 am

Re: What is the best routine for graphics compression?

Post by Joefish »

Well, with smaller graphics there's the question of whether you're expecting to compress the data that makes up a small graphic (and whatever order its bytes are in), or whether you mean to paste the graphic onto an otherwise empty screen and compress the whole screen, as they're different tasks. A whole screen has a fixed format that you can exploit. Maybe known-size graphics have formats that are exploitable too. But arbitrary data of variable length is something different.

My 'MiniDict' technique is to XOR each attribute with the one before, which turns repetitions into zeros.

I then XOR each pixel line with the one above, which again reduces blocks of pixels to just lines and dots around the edges. (What it's not good at is stipples).

Then I do a frequency analysis of what's left, and pick out the 16 most commonly used bytes (other than 0). Again, this takes advantage of the dots that are left by the XOR process.
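
In rough Python terms the pre-processing and dictionary steps look like this (just a sketch of the idea, not my actual tool; it assumes a plain 6912-byte .scr dump):

Code: Select all

from collections import Counter

def scr_offset(y, x):
    # Byte offset of column x (0-31) on visual pixel row y (0-191) in a .scr dump
    return ((y & 0xC0) << 5) | ((y & 0x07) << 8) | ((y & 0x38) << 2) | x

def preprocess(scr):
    # XOR each pixel row with the row above, and each attribute byte with the one
    # before it. We always read the original scr, so the filter never feeds back.
    out = bytearray(scr)
    for y in range(1, 192):
        for x in range(32):
            out[scr_offset(y, x)] = scr[scr_offset(y, x)] ^ scr[scr_offset(y - 1, x)]
    for i in range(6145, 6912):              # attributes live at offsets 6144..6911
        out[i] = scr[i] ^ scr[i - 1]
    return out

def mini_dictionary(filtered):
    # The 16 most common non-zero bytes left after filtering
    freq = Counter(b for b in filtered if b != 0)
    return [value for value, _ in freq.most_common(16)]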

The encoding is a bitstream as follows:

16 x bytes - common byte mini-dictionary
0 bit -> 0 byte
1,0 bit -> read next four bits, look up byte in mini-dictionary
1,1 bit -> read next 8 bits as literal byte

I've just added a refinement: after six 0 bits, it either flags a 1 to continue as normal, or else contains a binary count of how many more zero bytes to skip. This helps it skip over large blank areas of the screen. I can post some code later. But it actually runs on a Speccy. Or at least, in an emulator.
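
And the bit-packing itself boils down to something like this (continuing the sketch above; I've left the zero-run refinement out since that bit is still being tweaked):

Code: Select all

def encode(filtered, mini_dict):
    # 0 = zero byte; 1,0 + 4 bits = mini-dictionary entry; 1,1 + 8 bits = literal.
    lookup = {value: index for index, value in enumerate(mini_dict)}
    bits = []
    for b in filtered:
        if b == 0:
            bits.append(0)
        elif b in lookup:
            bits += [1, 0] + [(lookup[b] >> s) & 1 for s in range(3, -1, -1)]
        else:
            bits += [1, 1] + [(b >> s) & 1 for s in range(7, -1, -1)]
    # Output: the 16-byte mini-dictionary, then the bitstream padded to whole bytes
    out = bytearray((mini_dict + [0] * 16)[:16])
    for i in range(0, len(bits), 8):
        chunk = bits[i:i + 8]
        chunk += [0] * (8 - len(chunk))
        out.append(int("".join(map(str, chunk)), 2))
    return bytes(out)

# e.g.  filtered = preprocess(open("pic.scr", "rb").read())
#       packed = encode(filtered, mini_dictionary(filtered))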
Lethargeek
Manic Miner
Posts: 743
Joined: Wed Dec 11, 2019 6:47 am

Re: What is the best routine for graphics compression?

Post by Lethargeek »

Einar Saukas wrote: Mon Sep 16, 2019 5:06 pm To compress full screen .scr images, I'm sure you will get the best results using RCS+ZX7 combined together.
Nope. :P Let me remind you of this 3-year-old wos thread:

https://www.worldofspectrum.org/forums/ ... compressor

ZX7 (with RCS or not), like all the other Spectrum screen compression tools i'm aware of, has one fundamental flaw: it compresses bytes (i.e. strips) instead of square chunks. In most cases squares have better correlation between the pixels, hence a better possible compression ratio.

Also, chunky methods are better for compressing small pictures (i.e. sprites) - even separate ones. i planned to write a sprite compressor as well using a similar approach, but got distracted by other projects and everyday problems. :(
Joefish
Rick Dangerous
Posts: 2059
Joined: Tue Nov 14, 2017 10:26 am

Re: What is the best routine for graphics compression?

Post by Joefish »

No, because RCS actually turns the screen into 32x one-byte-wide vertical columns for each 2K chunk of screen, as well as re-ordering each column to be in continuous visual order. It's an extremely efficient transformation before doing any LZ type compression, as it still allows horizontal back-references within a 2K rolling window, but makes the more common vertical back-references very small (apart from the jump every screen third). I thought it was line-by-line, but a quick glance at his source code shows it uses columns.
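
In rough Python terms the transformation amounts to this (a sketch of the idea as I understand it from that glance at the source, not Einar's actual code):

Code: Select all

def rcs_like(bitmap):
    # Reorder the 6144-byte Spectrum bitmap into 32 one-byte-wide columns per 2K
    # third, each column in continuous top-to-bottom visual order.
    out = bytearray()
    for third in range(3):
        base = third * 2048
        for column in range(32):
            for char_row in range(8):
                for pixel_line in range(8):
                    out.append(bitmap[base + pixel_line * 256 + char_row * 32 + column])
    return bytes(out)
After that the byte from the pixel row directly above ends up only one byte back, which is why the vertical matches get so cheap.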

The main thing going for MiniDict3, though, is that the compression algorithm runs in a few seconds on the Spectrum itself. It's not the most tightly packed data, but simple chunky images do very well with it. And I'm also not insisting on daft licensing controls.
Lethargeek
Manic Miner
Posts: 743
Joined: Wed Dec 11, 2019 6:47 am

Re: What is the best routine for graphics compression?

Post by Lethargeek »

Joefish wrote: Thu Dec 19, 2019 3:57 pm No, because RCS actually turns the screen into 32x one-byte-wide vertical columns for each 2K chunk of screen, as well as re-ordering each column to be in continuous visual order.
this in no way disproves what i'm saying
Joefish wrote: Thu Dec 19, 2019 3:57 pm It's an extremely efficient transformation before doing any LZ type compression,
...and i'm saying that "LZ type" (or any byte-based) compression is inherently inefficient for zx graphics, regardless of transformations

the compression results speak for themselves, so why don't you check for yourself?

my zx-pk links from the old thread are dead now but the tool is also available on vtrd: https://vtrd.in/pcutilz/!LGKV1.1.zip

there's also the attribute optimizer as well: https://vtrd.in/pcutilz/OPALV1_0.zip
catmeows
Manic Miner
Posts: 718
Joined: Tue May 28, 2019 12:02 pm
Location: Prague

Re: What is the best routine for graphics compression?

Post by catmeows »

Lethargeek wrote: Thu Dec 19, 2019 4:38 pm
my zx-pk links from the old thread are dead now but the tool is also available on vtrd: https://vtrd.in/pcutilz/!LGKV1.1.zip
Please, could you share details of how it works?
Proud owner of Didaktik M
Lethargeek
Manic Miner
Posts: 743
Joined: Wed Dec 11, 2019 6:47 am

Re: What is the best routine for graphics compression?

Post by Lethargeek »

catmeows wrote: Mon Dec 23, 2019 12:53 pm could you share details of how it works?
copypasta from the old wos thread:
No screen reordering; it goes through standard Spectrum 8x8 tiles one by one rather than bytes and vertical byte stripes. But first it applies one of several useful xor-filters (not like joefish's predictor from the RCS thread, but a simpler one-directional 1-2 pixel shift then xor, or an xor with any already processed tile) separately for each tile. You can see the result in the file LgK-xored.bmp after running LgK-pack. These filters not only increase the number of zero bits for normal pics, but also make the remaining 1s distributed in such a fashion that 2x2 square pixel chunks can be packed well with a certain prefix code (the same for almost any sensible image, with the result very close to optimal Huffman packing in most cases, the difference usually less than the size of the Huffman tree). Again, you can see the statistics for the different types of 2x2 chunks by running LgK-pack. The filter choice for each tile is Huffman-packed, however, as this data is unpredictable for each screen. No RLE is used for tile data (right now, but maybe never).
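
As an illustration, one filter of that kind could look like this in Python (a rough sketch only, not the actual LgK code; the xor-with-a-neighbouring-tile filters aren't shown, and the cost used here to choose a filter is a simplification):

Code: Select all

def shift_xor(tile, shift=1):
    # tile = the 8 row bytes of an 8x8 character cell. XOR each row with itself
    # shifted by 1 or 2 pixels: solid horizontal runs of ink collapse to their
    # edges, so far fewer bits stay set.
    return [(row ^ (row >> shift)) & 0xFF for row in tile]

def ink_bits(tile):
    return sum(bin(row).count("1") for row in tile)

def pick_filter(tile):
    # Pick per tile whichever shift leaves the least ink. (The real criterion
    # looks at how the 2x2 chunks will actually pack; this is a simplification.)
    return min((shift_xor(tile, s) for s in (1, 2)), key=ink_bits)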

Attributes are compressed differently: basically there's a choice of repeating one of the two adjacent former attrs (and in rare cases yet another method is used, with two more choices skipping the adjacent ones to the next colour), or just loading the new attr value from the bitstream (or from a small list of the most frequent values). Repeating the former action (not the former attr!) is coded with one bit; changing action costs 2-3 bits depending on the method. Then longer sequences of repeats can be reduced with RLE (as there are many complex pics light on attributes, so here it's useful).

Attribute data can be placed (and depacked) either after each tile or after the whole picture separately. Give a "negative" address (the same number, just add a minus) to LgK-sfx to have attributes intermixed with tile data (as in the demo). The generated decompressor will be 2-3 bytes longer in this case.
Basically, the xoring maximizes the number of 2x2 pixel chunks (and indirectly the number of bigger square chunks) with no ink pixels, or just one, inside. Then all the 2x2 chunks are compressed using the same variable-size code for any screen:

Code: Select all

inks	code

none	0
1	10xx
2	110xx
2	1110x
4	11110
3	11111xx
On top of that, empty 4x4 and 8x8 chunks might be coded with an extra leading zero bit (this is decided for the whole screen).
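Decoding one chunk from that code is simple, something like this Python sketch (how the index bits map onto actual 2x2 pixel layouts is left out here; the real tables are inside the tool):

Code: Select all

def bit_reader(data):
    # Returns a function that yields successive bits from data, MSB first
    bits = ((byte >> shift) & 1 for byte in data for shift in range(7, -1, -1))
    return lambda: next(bits)

def decode_chunk(read_bit):
    # Decode one 2x2 chunk with the prefix code above; returns (inks, index).
    if read_bit() == 0:
        return 0, 0                               # '0'       empty chunk
    if read_bit() == 0:
        return 1, read_bit() * 2 + read_bit()     # '10xx'    one ink pixel
    if read_bit() == 0:
        return 2, read_bit() * 2 + read_bit()     # '110xx'   4 of the 6 two-ink patterns
    if read_bit() == 0:
        return 2, 4 + read_bit()                  # '1110x'   the other 2 two-ink patterns
    if read_bit() == 0:
        return 4, 0                               # '11110'   all four ink
    return 3, read_bit() * 2 + read_bit()         # '11111xx' three ink pixels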
catmeows
Manic Miner
Posts: 718
Joined: Tue May 28, 2019 12:02 pm
Location: Prague

Re: What is the best routine for graphics compression?

Post by catmeows »

I have tried Phantis (MAC):

zx7(ordered by columns) -> 6546
exomizer(ordered by columns) -> 6305
lgk -> 5713

and black-flag-deck:

zx7(ordered by columns) -> 1351
exomizer(ordered by columns) -> 1251
lgk -> 1280
Proud owner of Didaktik M
Lethargeek
Manic Miner
Posts: 743
Joined: Wed Dec 11, 2019 6:47 am

Re: What is the best routine for graphics compression?

Post by Lethargeek »

catmeows wrote: Tue Dec 24, 2019 10:26 am black-flag-deck:
I couldn't find this by name, but i suspect it's half empty or overly regular. As i planned to make a compressor for an arbitrary rectangular area and the decompressor was big enough already, i never optimized for "lighter" screens (and RLE is only used for attributes). On a big selection of screens LgK is usually much better (by hundreds of bytes) for about 90% of them, and very close (maybe a bit worse) for the rest.

There are also a few pathological cases (see R-Tape's avatar here :D or a low-res imitation like Alien Syndrome) with much worse ratios compared to "byte compressors", the result of a very limited number of unique bytes over most of the screen area and less (if any) correlation between adjacent pixels.

Also, pics with less regular dithering (like Floyd-Steinberg) may go either way, but usually the chunky method is good enough.
catmeows
Manic Miner
Posts: 718
Joined: Tue May 28, 2019 12:02 pm
Location: Prague

Re: What is the best routine for graphics compression?

Post by catmeows »

Lethargeek wrote: Tue Dec 24, 2019 3:14 pm
catmeows wrote: Tue Dec 24, 2019 10:26 am black-flag-deck:
I couldn't find this by name, but i suspect it's half empty or overly regular.
Indeed, it is.
As i planned to make a compressor for an arbitrary rectangular area
Ah, it is a pity you didn't. The benchmarks for full screens are interesting, but I think for in-game graphics smaller pictures are more relevant. And LZ variants have a really hard time with small screens, especially when they have fixed offset sizes.
Also, pics with less regular dithering (like Floyd-Steinberg) may go either way, but usually the chunky method is good enough.
I understand, but random dither is hard to pin down in general, so I don't see it as a problem.
To be honest I'm quite surprised how well LGK works.

One thing I would really like to try one day is a proper context -> prediction -> range coder packer.
I mean using the pixels to the west, north-west and north to form a 3-bit context and some kind of range coder. It would inevitably be slow (I did some estimates, and with 8-bit probabilities multiplying 16-bit limits I got a little less than 1000 T-states per pixel -> about 400-500 bytes/sec), but it would really catch the cases where you have a small illustration plus text on screen.
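
Roughly this kind of model is what I have in mind (a quick Python sketch that only does the context modelling and totals the ideal code length; the actual range coder, and the Z80 side of it, is the part that still needs writing):

Code: Select all

import math

def context_model_cost(rows):
    # rows = list of rows, each a list of 0/1 pixels.
    # Predict each pixel from a 3-bit context built from its W, NW and N
    # neighbours (out-of-picture neighbours count as 0) and sum the ideal
    # code length. Only the modelling is done here, no coder.
    ones = [1] * 8                      # adaptive counts per context
    zeros = [1] * 8
    bits = 0.0
    for y, row in enumerate(rows):
        for x, pixel in enumerate(row):
            w = row[x - 1] if x > 0 else 0
            n = rows[y - 1][x] if y > 0 else 0
            nw = rows[y - 1][x - 1] if x > 0 and y > 0 else 0
            ctx = (w << 2) | (nw << 1) | n
            p_ink = ones[ctx] / (ones[ctx] + zeros[ctx])
            bits -= math.log2(p_ink if pixel else 1.0 - p_ink)
            if pixel:
                ones[ctx] += 1
            else:
                zeros[ctx] += 1
    return bits / 8                     # ideal size in bytes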
Proud owner of Didaktik M
Lethargeek
Manic Miner
Posts: 743
Joined: Wed Dec 11, 2019 6:47 am

Re: What is the best routine for graphics compression?

Post by Lethargeek »

catmeows wrote: Mon Jan 06, 2020 5:15 pm Ah, it is a pity you didn't. The benchmarks for full screens are interesting, but I think for in-game graphics smaller pictures are more relevant. And LZ variants have a really hard time with small screens, especially when they have fixed offset sizes.
maybe i will address this in future versions, but right now i have too little free time when my brains are in working condition :(
catmeows wrote: Mon Jan 06, 2020 5:15 pm I understand, but random dither is hard to pin down in general, so I don't see it as a problem.
fun fact: once i found one particularly badly dithered specimen, and later it turned out the source was a greyscale picture with 8x1 wide pixels :lol:
and another: serious pc arithmetic coding software failed even worse at the EFMB title screen (an example of "abnormal correlation") :twisted:
catmeows wrote: Mon Jan 06, 2020 5:15 pm To be honest I'm quite surprised how well LGK works.
and it's still very much suboptimal! :)
Lethargeek
Manic Miner
Posts: 743
Joined: Wed Dec 11, 2019 6:47 am

Re: What is the best routine for graphics compression?

Post by Lethargeek »

catmeows wrote: Mon Jan 06, 2020 5:15 pm One thing I would really like to try one day is a proper context -> prediction -> range coder packer.
I mean using the pixels to the west, north-west and north to form a 3-bit context and some kind of range coder. It would inevitably be slow (I did some estimates, and with 8-bit probabilities multiplying 16-bit limits I got a little less than 1000 T-states per pixel -> about 400-500 bytes/sec), but it would really catch the cases where you have a small illustration plus text on screen.
do you have any estimates about the possible zx decompressor size?
catmeows
Manic Miner
Posts: 718
Joined: Tue May 28, 2019 12:02 pm
Location: Prague

Re: What is the best routine for graphics compression?

Post by catmeows »

Lethargeek wrote: Tue Jan 07, 2020 6:03 pm
catmeows wrote: Mon Jan 06, 2020 5:15 pm One thing I would really like to try one day is a proper context -> prediction -> range coder packer.
I mean using the pixels to the west, north-west and north to form a 3-bit context and some kind of range coder. It would inevitably be slow (I did some estimates, and with 8-bit probabilities multiplying 16-bit limits I got a little less than 1000 T-states per pixel -> about 400-500 bytes/sec), but it would really catch the cases where you have a small illustration plus text on screen.
do you have any estimates about the possible zx decompressor size?
Not much, I guess. 200 - 250 bytes perhaps.
Proud owner of Didaktik M
Nienn Heskil
Microbot
Posts: 134
Joined: Tue Jun 09, 2020 6:14 am

Re: What is the best routine for graphics compression?

Post by Nienn Heskil »

catmeows wrote: Wed Oct 02, 2019 7:15 am
Bedazzle wrote: Wed Oct 02, 2019 4:58 am What about megalz packer?
https://github.com/emmanuel-marty/lzsa/ ... /README.md

Here is a quite handy chart showing the compression/speed ratio.
Necropost: what's 'Pletter5d'? Is there a newer version after 0.5c1?
catmeows
Manic Miner
Posts: 718
Joined: Tue May 28, 2019 12:02 pm
Location: Prague

Re: What is the best routine for graphics compression?

Post by catmeows »

Nienn Heskil wrote: Wed Aug 26, 2020 5:53 pm
catmeows wrote: Wed Oct 02, 2019 7:15 am https://github.com/emmanuel-marty/lzsa/ ... /README.md

Here is a quite handy chart showing the compression/speed ratio.
Necropost: what's 'Pletter5d'? Is there a newer version after 0.5c1?
Good question. I don't know.
Proud owner of Didaktik M
introspec
Dizzy
Posts: 53
Joined: Wed Jan 02, 2019 2:26 pm
Location: London

Re: What is the best routine for graphics compression?

Post by introspec »

Nienn Heskil wrote: Wed Aug 26, 2020 5:53 pm Necropost: what's 'Pletter5d'? Is there a newer version after 0.5c1?
It is not an official version. When I was writing my own decompressor for Pletter, I realized that I could slightly change the format and allow for a longer match buffer (up to 16K instead of the up to 8K in Pletter 0.5c1). It worked fairly well, so I labeled it "Pletter 5d" and left it on the diagram. I am not really planning to release it, because it seems to be a bit of a dead-end technology: LZSA2 would work better for big(ger) data files with a lot of long matches, and MegaLZ works better for data with a lot of small matches (e.g. graphics files). I do not think Pletter is particularly competitive.