Comment by light_hue_1 4 days ago

Marcus will distort anything to push his agenda and to get clout.

Just because OpenAI might be overvalued and there are a lot of AI grifters doesn't mean LLMs aren't delivering.

They're astronomically better than they were 2 years ago. And they continue to improve. At some point they might run into a wall, but for now, they're getting better all the time. And real multimodal models are coming down the pipeline.

It's so sad to see Marcus totally lose it. He was once a reasonable person. But his idea of how AI should work didn't work out. And instead of accepting that and moving forward, or finding a way to adapt, he just decided to turn into a fringe nutjob.

skybrian 4 days ago

I would say “mild” rather than “astronomical” improvement as far as end-user applications are concerned, at least for the things I use every day. Copilot-style autocomplete in VS Code isn’t much better, and the answers to my TypeScript questions from OpenAI (and now Claude) have only mildly improved.

Perhaps I’ve missed out. Is your experience different? What are you doing now that you weren’t doing before?

  • grugagag 4 days ago

    I think the answer is that they jumped all in and are fully incorporating it into their workflow. If you haven’t, like me, you get a different experience, which is obvious of course. But objectively you’re probably right about mild improvements, as I feel the same. I can’t speak to the all-in experience, though. I may be missing out, but I’m usually set in my ways until something convinces me to reset them. LLMs aren’t making that dent, though I have to admit I use them at least once a week and am happy with that use alone.

klabb3 4 days ago

> he just decided to turn into a fringe nutjob.

No dog in the fight here, but this reads like FUD, at least given the context of this post. There is a healthy range between hype and skepticism in debate, and that range would naturally be larger in a domain as poorly understood as gen AI's emergent properties. If this is “fringe nutjob” levels of skepticism, then what would be reasonable?

int_19h 4 days ago

2 years ago is a rather arbitrary cutoff point - it would be around the time of GPT-3.5. But the original GPT-4 was out in March 2023, and I can't say that the current state of OpenAI's models is a massive improvement on that. In fact, in some respects, I'd say the newer stuff is dumber.