Comment by aurareturn 3 days ago
>* Claiming it's predictive text engine that isn't useful for anything
This one is very common on HN and it's baffling. Even if it's a predictive text engine, who the hell cares, as long as it achieves its goals? If an LLM is actually a bunch of dolphins typing on a keyboard made for dolphins, I couldn't care less, as long as it does what I need it to do. For the people who keep repeating this on HN: why? I just want to know out of curiosity.
>* AI will never be able to [some pretty achievable task]
Also very common on HN.
You forgot "AI will never be able to do what a human can do in the exact way a human does it, so AI will never achieve X".
> Even if it's predictive text, who the hell cares if it achieves its goals?
Haha ... well, in the literal sense it does achieve "its" goals, since it only ever had one goal: minimizing its training loss. Mission accomplished!
OTOH, if you mean achieving the user's goals, then it rather depends on what those goals are. If the goal is to save you typing while coding, even though you need to check it all yourself anyway, then I guess mission accomplished there too!
Whoopee! AGI done! Thank you, Dolphins!