Comment by refulgentis 2 days ago
You said "Good." and then wrote a stirring bit about how a bad experience with a 1T model will push people toward 4B/32B models.
That seems separate from the post you were replying to, which was about 1T-param models.
If it was intended as a reply, it hand-waves about how a bad experience will teach people to buy more expensive hardware.
Is that "Good."?
The post points out that if people are taught they need an expensive computer just to get 1 token/second, let alone actually try it and discover it's a horrible experience (let's not even get into prefill), it will turn them off local LLMs unnecessarily.
Is that "Good."?
Had you posted this comment in the early '90s about Linux instead of local models, it would have made about as much sense and aged just as poorly as this one will.
I'll remain here, happily using my 2-point-something tokens/second model.