Comment by matheist 2 days ago

Looks interesting! Is there any intuition for why this should be the case? Did you discover it via that intuition, or just random experimentation?

A note: your install script appears to still have a placeholder at the "apply patch" step. A suggestion: it might be more user-friendly to fork llama.cpp and then include that as a git submodule rather than make it a "git clone and apply patch" step.
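For example, something like this (the fork URL is just a placeholder for wherever your fork lives):

    git submodule add https://github.com/<your-fork>/llama.cpp llama.cpp

then commit the submodule pinned at whatever revision the patch was tested against, so users get a known-good tree instead of patching whatever HEAD happens to be that day.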

A further note: everyone and their dog has a different local Python setup, so it might be nice to let people separate the llama.cpp stuff from the Python stuff rather than bake in a dependence on Homebrew Python.

dipampaul17 2 days ago

Great question about the intuition! The difference comes from the core roles these components play in attention.

Keys determine which tokens to attend to - they create the actual attention pattern through similarity calculations. Values only store what information gets passed forward once attention is decided.

When a key vector is quantized too aggressively, it distorts the similarity calculations for every token interaction. A small error in keys can completely redirect attention to the wrong tokens.

Values, however, are much more forgiving. When a value vector is quantized, any error only affects the specific information content of that single token after the attention pattern is already established.

It's like a library catalog system vs. the books themselves. If catalog numbers (keys) are corrupted, you'll look in completely wrong sections. If some words in books (values) are smudged, you're still reading the right book - just with occasional noise.

Mathematically, key errors perturb the logits that feed the softmax, and the exponential there can turn a small logit error into a noticeably different weighting over tokens. Value errors only enter after the weights are fixed, through a linear weighted average, so they stay small and tend to partially cancel out.
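To make that concrete, here's a throwaway numpy sketch (toy sizes and a made-up noise scale, nothing to do with the actual patch) that applies the same small perturbation to K and to V and compares the effect:

    import numpy as np

    rng = np.random.default_rng(0)
    d, n = 64, 16                        # head dim, context length
    q = rng.standard_normal(d)
    K = rng.standard_normal((n, d))
    V = rng.standard_normal((n, d))

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def attention(q, K, V):
        w = softmax(K @ q / np.sqrt(d))  # keys decide where to look
        return w, w @ V                  # values decide what comes back

    w0, out0 = attention(q, K, V)
    noise = 0.1 * rng.standard_normal((n, d))  # stand-in for quantization error

    wk, outk = attention(q, K + noise, V)      # error in the keys
    wv, outv = attention(q, K, V + noise)      # error in the values

    print("attention shift from K noise:", np.abs(wk - w0).sum())  # nonzero
    print("attention shift from V noise:", np.abs(wv - w0).sum())  # exactly 0
    print("output error from K noise:", np.linalg.norm(outk - out0))
    print("output error from V noise:", np.linalg.norm(outv - out0))

Noise in V can never move the attention weights - it just rides along on top of the output. Noise in K moves the weights themselves, and once the weights move, the output picks up pieces of entirely wrong value vectors instead of just the small quantization error.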

I first encountered this asymmetry in papers like "More for Keys, Less for Values" and "KV-AdaQuant," but wanted to quantify exactly how it impacts Apple Silicon inference. The 7× quality difference between K8V4 and K4V8 using identical memory was striking.

Thanks for the installation feedback too! I'll fix the placeholder and make the Python dependencies more flexible.

    vlovich123 2 days ago

    My understanding is that the roles of the K/V/Q tensors aren't actually well understood, and that while they're called key/value/query, it's not quite straightforward to tease out what they mean or the role they play.

Aurornis 2 days ago

> A note: your install script appears to still have a placeholder at the "apply patch" step. A suggestion: it might be more user-friendly to fork llama.cpp and then include that as a git submodule rather than make it a "git clone and apply patch" step.

The patch doesn't actually apply to llama.cpp because argument parsing was moved to arg.cpp 8 months ago.

That doesn't matter, though, because the options to set K and V quantization were added to llama.cpp in 2023.
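If anyone wants the K8V4 configuration from the post, the stock binaries already expose it. From memory it's something like this (model path is a placeholder - check --help for your build):

    llama-cli -m your-model.gguf -fa --cache-type-k q8_0 --cache-type-v q4_0

where -fa enables flash attention, which llama.cpp needs for a quantized V cache if I remember right.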

I don't understand why the patch exists at all, other than as an attempt to make this look novel by changing the settings through a different command line argument?

I would strongly recommend that nobody run an install.sh file from a new repo like this, especially when it's not necessary for something as simple as applying a patch file.