Comment by ACCount37
It's not a very promising direction, because as a rule, autoregressive LLMs still deliver better output quality at the same model size.
Now, is it possible for a model to combine the advantages of both? To combine the fast generation and multidirectional causality of diffusion with the precision, capabilities, and generalization of autoregression?
Maybe. This paper is research in that direction. So far, it's not a clear upgrade over autoregressive LLMs.
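For concreteness, here's the standard contrast being drawn (textbook formulations, not specific to the linked paper). An autoregressive LM factorizes left-to-right, so generation is strictly sequential and each token conditions only on its prefix:

    p_AR(x_{1:T}) = \prod_{t=1}^{T} p_\theta(x_t \mid x_{<t})

A discrete diffusion LM instead starts from a fully corrupted sequence x_K (e.g. all-masked) and iteratively denoises it, updating every position in parallel at each step:

    x_{k-1} \sim p_\theta(x_{k-1} \mid x_k),   k = K, ..., 1

That parallel, whole-sequence update is what gives diffusion its speed and bidirectional context, at the cost of the exact token-level factorization autoregression gets for free.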
Diffusion LMs do seem to be able to get more out of the same data. In a world where we're already training transformer-based LLMs on all the text available, diffusion LMs' ability to keep learning from a fixed dataset may let them eventually outperform autoregressive transformers.
https://arxiv.org/abs/2511.03276