bob1029 2 days ago

So, we're proposing a multiplicative increase of something that already scales quadratically with the context size?

I think we've already got a bit of a bottleneck in terms of memory bandwidth utilization.

kadushka 2 days ago

If you have a bottleneck in terms of memory bandwidth utilization, this method is great - it would utilize the idle compute.
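
Rough back-of-the-envelope numbers for that point (the hardware figures below are assumed, roughly H100-class, not measurements from the thread):

    # Illustrative roofline arithmetic: if attention at decode time is
    # memory-bound, extra FLOPs per byte are effectively free until the
    # arithmetic intensity reaches the machine balance.
    peak_flops = 1.0e15     # ~1 PFLOP/s dense BF16 (assumed figure)
    peak_bw = 3.3e12        # ~3.3 TB/s HBM bandwidth (assumed figure)
    machine_balance = peak_flops / peak_bw        # ~300 FLOPs per byte moved

    # Plain attention at decode reads every KV-cache byte roughly once and
    # does on the order of one multiply-add with it: ~1-2 FLOPs per byte.
    attn_intensity = 2.0
    utilization = attn_intensity / machine_balance
    print(f"machine balance ~{machine_balance:.0f} FLOPs/byte")
    print(f"attention uses ~{utilization:.1%} of peak compute")  # well under 1%

So as long as the workload stays bandwidth-bound, multiplying the FLOPs spent per byte by a small constant barely moves the wall-clock time.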

EGreg 2 days ago

LLaMA 3 already uses rotary positional embeddings (RoPE), which can handle arbitrarily long contexts (within reason):

https://arxiv.org/abs/2104.09864

The difference RoPE makes versus traditional positional encoding is that you only care about relative distances between tokens, and attention can be attenuated over great distances.

Instead of making the model look at every token in the entire sequence all at once (which gets expensive fast), you can break the text into logical chunks, like sentences or paragraphs, and run self-attention within each chunk. That keeps things efficient while still capturing local meaning. Then, for each chunk, you create a summary, either by pooling or using a small learned head, and pass those summaries into a second layer of attention that operates on a much smaller scale. This gives you higher-level context across the document, kind of like moving from sentences to sections to the whole thing. Optionally, you can even send that higher-level context back down to influence the lower layers.

This approach shows up in models like Longformer and BigBird (which use attention windows), hierarchical models (like HANs), and newer architectures like RetNet and Mamba that compress information over time or scale. RoPE fits neatly into this by helping each chunk handle relative positions more naturally.
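
Here's a minimal sketch of that two-level idea in PyTorch (module names and sizes are made up for illustration; this isn't Longformer's or BigBird's actual code):

    import torch
    import torch.nn as nn

    class TwoLevelAttention(nn.Module):
        def __init__(self, d_model=256, n_heads=4, chunk=128):
            super().__init__()
            self.chunk = chunk
            self.local = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.global_ = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

        def forward(self, x):                    # x: (batch, seq, d_model)
            b, n, d = x.shape
            c = self.chunk
            assert n % c == 0, "pad the sequence to a multiple of the chunk size"
            # Level 1: self-attention inside each chunk, treated as its own sequence.
            chunks = x.reshape(b * n // c, c, d)
            local, _ = self.local(chunks, chunks, chunks)
            # One pooled summary vector per chunk.
            summaries = local.mean(dim=1).reshape(b, n // c, d)
            # Level 2: attention across chunk summaries (much shorter sequence).
            ctx, _ = self.global_(summaries, summaries, summaries)
            # Optionally broadcast the chunk-level context back down to tokens.
            ctx = ctx.repeat_interleave(c, dim=1)
            return local.reshape(b, n, d) + ctx

The local pass costs on the order of num_chunks * chunk^2 instead of n^2, and the global pass only ever sees one summary vector per chunk.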

RoPE is kind of perfect for this setup because it handles relative positions directly in the attention mechanism, which means each chunk can still understand the order and spacing of tokens without relying on fixed position embeddings. It’s especially useful when you're working with long sequences or chunked inputs, because it doesn’t care where the chunk is in the overall document; it just cares about how tokens relate to each other within that chunk. RoPE also makes it easier for models to generalize to longer inputs than they were trained on, since the rotational math behind it naturally extends beyond the original context window. Plus, because it's baked into the dot product itself, it adds no extra parameters and only negligible computation, and plays well with hierarchical or multi-scale attention setups. Basically, it’s a clean, efficient way to inject positional awareness that doesn’t break when you start slicing things up.
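
If you want to convince yourself of the relative-position property, here's a quick numerical check with a bare-bones RoPE in NumPy (not LLaMA's exact implementation): the q·k score depends only on the offset between the two positions, no matter how far you slide both of them.

    import numpy as np

    def rope(x, pos, theta=10000.0):
        d = x.shape[-1]                             # head dim, must be even
        freqs = theta ** (-np.arange(0, d, 2) / d)  # per-pair rotation frequencies
        ang = pos * freqs
        cos, sin = np.cos(ang), np.sin(ang)
        x1, x2 = x[..., 0::2], x[..., 1::2]
        out = np.empty_like(x)
        out[..., 0::2] = x1 * cos - x2 * sin        # rotate each 2-D pair by pos * freq
        out[..., 1::2] = x1 * sin + x2 * cos
        return out

    rng = np.random.default_rng(0)
    q, k = rng.normal(size=64), rng.normal(size=64)
    for shift in (0, 100, 5000):                    # slide both tokens by the same offset
        score = rope(q, 10 + shift) @ rope(k, 3 + shift)
        print(shift, round(score, 6))               # same score every time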

PS: LLaMA's RoPE may be a bit off but it still works great: https://discuss.huggingface.co/t/is-llama-rotary-embedding-i...

cma 2 days ago

> allowing nearby queries and keys to affect each other's attention weights for more precise attention

If it's only nearby tokens, the increase is multiplicative by a constant, right? It's not cubic scaling with context length or anything.
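
Quick sanity check of the constant-factor point (the window size c below is a made-up stand-in, not a number from the paper):

    # Rough operation counts: mixing each attention score with a fixed
    # c x c neighborhood only scales the quadratic term by a constant.
    def attn_ops(n, d=128, c=0):
        scores = n * n * d      # QK^T
        mixing = n * n * c * c  # combine each score with its c x c neighborhood
        return scores + mixing

    for n in (1_000, 2_000, 4_000):
        ratio = attn_ops(n, c=4) / attn_ops(n, c=0)
        print(n, round(ratio, 3))  # stays ~1.125 as n grows: still O(n^2), not O(n^3)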

DeepSeek got a training performance increase by predicting two tokens at a time, though unlike this it doesn't carry over into inference with the final model. They did say it can be used for speculative decoding to reduce inference costs, though.

They may get away with fewer attention heads with this new approach, too.

jgalt212 2 days ago

Maybe Sam was right about needing one trillion dollars!