Comment by MisterTea 18 hours ago

11 replies

This is something I have been curious about: how does an LLM achieve compression?

I would like to know what deviations appear in the output, as this almost feels like a game of telephone where each re-compression loses data that is then incorrectly reconstructed. Sort of like misremembering a story: as you retell it over time, the details change slightly.

Scaevolus 18 hours ago

When LLMs predict the next token, they actually produce a probability distribution over all possible next tokens, and the sampler chooses one of them, not necessarily the most likely one!

If instead you run the LLM prediction and then encode the actual next token of the input text by its probability (its position in the cumulative distribution, a number in [0, 1]) using arithmetic coding, you can run the same operation in reverse to achieve lossless compression.

The tricky part is ensuring that your LLM executes absolutely deterministically, because the encoder and decoder must arrive at exactly the same probability distribution at each step.
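A toy sketch of the scheme (all names hypothetical; a simple adaptive bit predictor stands in for the LLM, and exact fractions stand in for a real integer-renormalized arithmetic coder). The point is that encoder and decoder run the identical `predict` function, so as long as it is deterministic the roundtrip is lossless:

```python
from fractions import Fraction
from math import ceil

def predict(ones, total):
    # Stand-in for the LLM: Laplace-smoothed estimate that the next bit is 1.
    # Encoder and decoder must compute bit-identical probabilities.
    return Fraction(ones + 1, total + 2)

def encode(bits):
    low, width = Fraction(0), Fraction(1)
    ones = total = 0
    for b in bits:
        p1 = predict(ones, total)
        split = low + width * (1 - p1)  # [low, split) codes 0, [split, low+width) codes 1
        if b:
            low, width = split, width * p1
        else:
            width *= 1 - p1
        ones += b
        total += 1
    # Emit the shortest dyadic rational m / 2**k inside the final interval;
    # k is the number of output bits.
    k = 0
    while True:
        m = ceil(low * 2**k)
        if Fraction(m, 2**k) < low + width:
            return Fraction(m, 2**k), k
        k += 1

def decode(x, n):
    low, width = Fraction(0), Fraction(1)
    ones = total = 0
    out = []
    for _ in range(n):
        p1 = predict(ones, total)   # same model, same state: same distribution
        split = low + width * (1 - p1)
        bit = 1 if x >= split else 0
        if bit:
            low, width = split, width * p1
        else:
            width *= 1 - p1
        out.append(bit)
        ones += bit
        total += 1
    return out

msg = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1]  # biased toward 1s, so compressible
code, nbits = encode(msg)
assert decode(code, len(msg)) == msg        # lossless roundtrip
```

A real system replaces `predict` with the LLM's next-token distribution and works over the token alphabet rather than single bits, but the structure is the same.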

AnotherGoodName 17 hours ago

Yes. The secret is in understanding arithmetic coding. https://en.wikipedia.org/wiki/Arithmetic_coding

Arithmetic coding takes a prediction of the next bit and writes out exactly as many bits as needed to correct that prediction. The amazing part is that you can write out fractional bits. E.g. you predict the next bit is '1' with 75% probability? If it is 1 you only need to write out 1/2 of a bit (correcting that 25% portion). If it's 0 you need to write out 2.4 bits. It may seem strange to work with half a bit, but it works! (Essentially the other half of the bit represents other future corrections required.) You might have heard of Huffman coding, which can't deal with fractional bits; arithmetic coding is a generalization of Huffman coding that can.

Arithmetic coding is mathematically optimal at what it does. Given a prediction of the data, you will not waste a single bit using this algorithm to encode it.

So modern compression techniques don't deal with the encoding/decoding side at all. What they deal with is modelling the data so far and making the most accurate prediction possible for the next bit of data (next byte also works, but working one bit at a time is easier to comprehend when learning arithmetic coding).

Incidentally, the encoder and decoder work essentially the same way: given the data read or decoded so far, predict the next bit. That part is identical in both directions; the decoder reads the compressed file for the correction, while the encoder reads the input file and writes out the correction. The important part is "predict the next bit". This is what separates all the different compressors.

This is also where those of us experienced in this area try to correct people on the wrong track. A compression algorithm is never about the encoding side; it is 100% about predicting the data. Can you build a model that accurately predicts the next data to come? That's what you need to do to make a better file compressor. The entropy coding part is a completely solved problem already; don't bother re-solving it.
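To make "compression is prediction" concrete, here is a small sketch (hypothetical names) that computes the ideal arithmetic-coding cost, the sum of -log2(p) over the bits that actually occurred, for two predictors on the same biased data. Only the predictor changes; the coding machinery is fixed:

```python
from math import log2

def ideal_bits(bits, predict):
    # Code length an ideal arithmetic coder would emit: the sum of
    # -log2(probability the model assigned to the bit that occurred).
    total = 0.0
    ones = seen = 0
    for b in bits:
        p1 = predict(ones, seen)
        total += -log2(p1 if b else 1 - p1)
        ones += b
        seen += 1
    return total

data = [1] * 90 + [0] * 10

uniform = lambda ones, seen: 0.5                       # no model: 1 bit per bit
adaptive = lambda ones, seen: (ones + 1) / (seen + 2)  # learns the bias as it goes

print(ideal_bits(data, uniform))   # 100.0 bits: no compression at all
print(ideal_bits(data, adaptive))  # far fewer bits for the same data
```

The better the predictor matches the data, the shorter the output; the entropy coder itself is identical in both cases.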

  • billforsternz 13 hours ago

    I don't understand. On average, for every 4 input bits we will get it right 3 times writing 0.5 bits each time and get it wrong once writing 2.4 bits once. So we write a total of 3 * 0.5 + 2.4 bits = 3.9 bits. The compressed output is 3.9/4 = 97.5% as big as the input. Not very compelling. What am I misunderstanding?

    • AnotherGoodName 13 hours ago

I back-of-the-enveloped it wrong is what :(.

It's -log2(0.75) for getting a 75% prediction right and -log2(0.25) for getting it wrong. I should have stated 0.4 bits and 2 bits respectively, not 0.5 and 2.4. Sorry! Good catch.

It's 3.2 vs 4 bits. That may not seem huge, but the probabilities tend toward the extremes if the predictor is any good. Once you get into the 99% range you get extreme efficiency.
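You can check those numbers directly (a small sketch, hypothetical names): the expected code length per bit for a predictor that assigns the correct bit probability p is just the binary entropy of p, which drops off sharply as p approaches 1:

```python
from math import log2

def expected_bits(p):
    # Average arithmetic-coding cost per bit when the predictor says "1"
    # with probability p and is right with that frequency: p*(-log2 p) + (1-p)*(-log2(1-p)).
    return p * -log2(p) + (1 - p) * -log2(1 - p)

for p in (0.75, 0.9, 0.99, 0.999):
    print(f"p = {p}: {expected_bits(p):.4f} bits per input bit")
```

At p = 0.75 this gives about 0.811 bits per bit, matching the 3.2-vs-4 figure above; at p = 0.99 it is only about 0.08 bits per bit.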

  • ChadNauseam 16 hours ago

    That's true for lossless compression. Is there a generalization for lossy compression?

    • AnotherGoodName 15 hours ago

Lossy and lossless are fundamentally the same, actually. All state-of-the-art lossy compressors use arithmetic coding, and they still do prediction. E.g. your favourite video codecs predict not only the next bit within the 2D frame, but also the next bit when modelling past frames (the data becomes a 3D cube at that point), and they do things like motion prediction of individual objects in the frame to help make a better prediction. They all use arithmetic encoders to encode the data.

Where the lossy part comes in is the point at which humans notice/don't notice data being thrown away. Got a bit that was way out of line with the prediction and will cost you 10 bits to correct? Perhaps to humans it doesn't matter? Can we throw it away? This throwing away of data is often done before the prediction+compression stage (e.g. quantizing the colour space from 32 bits down to 8 bits), but from there on it's the same thing.

      • eru 14 hours ago

        To simplify: lossy compression = lossless compression + a perception model that can tell you what aspect of the data you can safely toss away without anyone noticing.
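A minimal sketch of that split (hypothetical names): quantization is the lossy, perceptual step that throws precision away up front; the quantized codes are then handed to the same lossless prediction+coding pipeline, and reconstruction is exact up to the quantization step size:

```python
def quantize(samples, levels=256):
    # Lossy step: snap each sample in [0, 1) to one of `levels` buckets.
    # Everything discarded here is gone for good; this is the "perception model" stage.
    return [min(int(s * levels), levels - 1) for s in samples]

def dequantize(codes, levels=256):
    # Reconstruct each bucket's centre; error is at most half a bucket width.
    return [(c + 0.5) / levels for c in codes]

samples = [0.123456, 0.5, 0.999]
codes = quantize(samples)        # these small integers get losslessly entropy-coded
recon = dequantize(codes)
assert all(abs(a - b) <= 0.5 / 256 for a, b in zip(samples, recon))
```

Real codecs make the quantization perceptually adaptive (coarser where humans won't notice), but the division of labour, lossy quantize then lossless code, is the same.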

      • ChadNauseam 14 hours ago

        Thanks! That's really enlightening. Maybe this can help me settle a question I've had. If I have 4k video and I want a smaller file size (but suppose I still have a 4k tv), I always felt like I should get better quality at that file size by compressing it further than by reducing the resolution. Rather than deciding for myself what data to throw away, why not let the compression algorithm do it?

        • AnotherGoodName 14 hours ago

Lowering the resolution to match the typical TV resolution is sensible, but beyond that, trusting the codec is always the better way. Codecs compress different portions of the video adaptively. The top-left area that's in shadow for the next 30 seconds? Maybe that area's effective resolution can be lowered. They can do things like this!

If you change the resolution or colour space of the entire file, you do so without any consideration of where the extra detail might have been needed.

So the resolution should match the typical output resolution exactly, and from there it's all on the codec.