Comment by RossBencina 7 days ago
One claim from that podcast was that the xLSTM memory mechanism (mLSTM) is, in practical implementations, more efficient than (transformer) FlashAttention, and therefore promises to significantly reduce the time/cost of test-time compute.
Test it out here:
https://github.com/NX-AI/mlstm_kernels
https://huggingface.co/NX-AI/xLSTM-7b
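For anyone who wants to poke at it, here's a minimal sketch of loading the 7B checkpoint through the standard Hugging Face transformers API. This assumes a recent transformers release with xLSTM support and the mlstm_kernels package installed (plus accelerate for device_map); exact version requirements may differ, so check the model card.

    # Minimal sketch: load NX-AI/xLSTM-7b via the transformers Auto classes.
    # Assumes recent transformers with xLSTM support and mlstm_kernels installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "NX-AI/xLSTM-7b"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # the optimized kernels target bf16 on CUDA
        device_map="auto",           # requires the accelerate package
    )

    inputs = tokenizer("The xLSTM architecture", return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(out[0], skip_special_tokens=True))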