Comment by bigyabai
Maybe, but even that fourth-order metric is missing key performance details like context length and model size/sparsity.
The bigger takeaway (IMO) is that there will never really be consumer hardware that scales the way Claude or ChatGPT does. I love local AI, but it runs up against the fundamental limits of on-device compute.