Comment by janalsncm 2 days ago

As I understand it, BLT (Byte Latent Transformer) uses a small neural network to tokenize but doesn’t change the attention mechanism, while MTA (Multi-Token Attention) uses traditional BPE for tokenization but changes the attention mechanism. You could use both (latency be damned!)