Comment by pona-a
I wonder if these N-gram reduced models, augmented with confidence measures, can act as very fast speculative decoders. Or maybe the sheer number of explicit rules unfolded from the compressed latent representation will make it impractical.
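Roughly, I'm imagining something like the sketch below for the drafting side (all names here are hypothetical, not any existing API): the n-gram table proposes tokens only while the relative frequency of its top continuation stays above a threshold, and the target model then verifies the draft in one forward pass, as in standard speculative decoding.

```python
from collections import defaultdict

def build_ngram_table(corpus_ids, n=3):
    """Count next-token frequencies for each (n-1)-token context."""
    table = defaultdict(lambda: defaultdict(int))
    for i in range(len(corpus_ids) - n + 1):
        ctx = tuple(corpus_ids[i : i + n - 1])
        table[ctx][corpus_ids[i + n - 1]] += 1
    return table

def propose(table, context, k=4, min_conf=0.9):
    """Greedily draft up to k tokens, stopping once the n-gram
    confidence (relative frequency of the top continuation) drops."""
    draft, ctx = [], tuple(context)
    for _ in range(k):
        counts = table.get(ctx)
        if not counts:
            break
        total = sum(counts.values())
        tok, cnt = max(counts.items(), key=lambda kv: kv[1])
        if cnt / total < min_conf:
            break  # not confident enough to keep speculating
        draft.append(tok)
        ctx = ctx[1:] + (tok,)
    return draft

# e.g. propose(build_ngram_table([1,2,3,1,2,3,1,2,4]), (2, 3)) -> [1, 2]
```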
I'd also like to see a list of similarly simple rule-extraction techniques that ML researchers could automatically try in bulk. In this case, the N-gram rules would be the starting point; for whatever predictions those fail on, they'd throw in the other techniques, as sketched below. Eventually most or all of the predictions should be captured by one or more simple rules, some of them compound rules mixing techniques.
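Concretely, something like this loop (again a sketch: ngram_rules is a toy stand-in for real extraction, and suffix_rules, regex_rules, etc. are placeholders for whatever else makes the list):

```python
from collections import Counter, defaultdict

def ngram_rules(examples):
    """Toy first-pass extractor: keep (context -> target) pairs whose
    target is perfectly consistent; the rest becomes the residual."""
    by_ctx = defaultdict(Counter)
    for ctx, tgt in examples:
        by_ctx[ctx][tgt] += 1
    table = {ctx: next(iter(c)) for ctx, c in by_ctx.items() if len(c) == 1}
    residual = [(c, t) for c, t in examples if c not in table]
    return [("ngram", table)], residual

def cascade(examples, extractors):
    """Run each extractor on whatever the previous ones failed to
    predict, accumulating rules until the residual is empty."""
    rules, residual = [], examples
    for extract in extractors:
        new_rules, residual = extract(residual)
        rules.extend(new_rules)
        if not residual:
            break
    return rules, residual  # leftovers: predictions no simple rule captured

# e.g. cascade(pairs, [ngram_rules, suffix_rules, regex_rules])
```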
I think there would also be benefits both in interpretability and in hardware acceleration. In time, maybe even cheaper pretraining of useful models.