Comment by ow5
Hi! I'm one of the contributors to the paper. We have kernels, not released yet, that can shave decoding latency by >20%.
Also, when we ran streaming experiments with the current kernels, we were a median ~1.3x slower at inference.
Thanks for chiming in! How do you explain the top-most graph in Figure 5? Am I misreading it?