Comment by catigula 15 days ago
A lot of really smart people working on problems that don't even really need to be solved is an interesting aspect of market allocation.
The problem I see with this type of response is that it doesn't take into account the waste of resources involved. If the 700M users per week figure is legitimate, then my question to you is: how many of those invocations are worth the resources spent on them, in the name of things that are truly productive?
And if AI were truly the holy grail it's being sold as, there wouldn't be 700M users per week burning through resources this heavily, because generative AI would already have produced something better. It really does seem like these platforms aren't, and won't be, anywhere near as useful as they're continuously claimed to be.
Just like with Tesla FSD, we keep hearing about a "breakaway" model and the broken record of AGI. Instead of anything exceptionally better, we seem to be getting models tuned for benchmarks and only marginal improvements.
I really try to limit what I'm using an LLM for these days. Not simply because of the resource pigs they are, but because it's also often a time sink. I spent an hour today testing out GPT-5, asking it about a specific problem involving only two well-documented technologies. After that hour it had hallucinated about a half dozen assumptions that were completely incorrect, one so obvious that I couldn't understand how it had gotten it so wrong. This particular technology, by default, consumes raw SSE. But GPT-5, even after being told it was wrong, kept giving me examples that were in many ways worse, and kept insisting I validate that my server responses were JSON formatted in a particularly odd way.
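For reference, consuming raw SSE is not exotic. A minimal Python sketch (the endpoint URL is hypothetical, and whether each event's payload is JSON depends entirely on the server):

    # Minimal raw-SSE consumer; the /events URL is a made-up example.
    # SSE arrives as plain text lines prefixed with "data:", not as a
    # single JSON document.
    import requests

    resp = requests.get("http://localhost:8000/events", stream=True)
    for raw in resp.iter_lines(decode_unicode=True):
        if raw and raw.startswith("data:"):
            print(raw[len("data:"):].strip())  # the raw event payload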
Instead of continuing to waste my time correcting the model, I just went back to reading the docs and GitHub issues to figure out the problem myself. And that led me down a dark chain of thought: what happens when the "teaching" mode rethinks history, or math fundamentals?
I'm sure a lot of people think ChatGPT is incredibly useful. And a lot of people are bought into not wanting to miss the boat, especially those who don't have any clue as to how it works or what it takes to execute any given prompt. I actually think LLMs are on a trajectory similar to social media's. The curve is different, and hopefully we haven't yet seen the most useful aspects come to fruition. But I do think that if OpenAI is serving 700M users per week then, once again, we are the product. Because if AI could actually displace workers en masse today, you wouldn't have access to it for $20/month. And they wouldn't offer it to you at 50% off for the next 3 months when you go to hit the cancel button. In fact, if it could do most of the things executives are claiming, you wouldn't have access to it at all. But, again, the users are the product, in very much the same way it played out with social media.
Finally, I'd surmise that of those 700M weekly users, fewer than 10% of sessions are being used for anything productive of the sort you've mentioned, and I'd place a high wager that even 10% is generous. I could be wrong, but again, we'd know about it if it were the actual truth.
> If the 700M users per week is legitimate then my question to you is: how many of those invocations are worth the cost of resources that are spent, in the name of things that are truly productive?
Is everything you spend resources on truly productive?
Who determines whether something is worth it? Is price/willingness of both parties to transact not an important factor?
I don't think ChatGPT can do most things I do. But it does eliminate drudgery.
I don't believe everything in my world is as efficient as it could be. But I genuinely think about the costs involved [0]. When an automation is handled perfectly well by a deterministic system, why would I put its outcome in the hands of a non-deterministic one? And at that cost differential?
We know a few things: LLMs are not efficient; LLMs consume more water than traditional compute; the providers know this but haven't shared any tangible metrics; and training them also takes an exceptional amount of time, electricity, and water.
For me it's: if you have access to a supercomputer, do you use it to tell you a joke or to work on a life-saving medicine?
We didn't have these tools 5 years ago; 5 years ago you dealt with said "drudgery" yourself. Yet in the same breath you say it can't do "most things I do". Fatalism and paradox seem to be in full force in a lot of the arguments around AI.
I think the real kicker for me this week (and it changes week-over-week, which is at least entertaining) is Paul Graham telling his Twitter feed [1] that a "hotshot" programmer is writing 10k LOC that are not "bug-filled crap" in 12 hours. That's roughly 14 LOC per minute, sustained, compared to industry norms of 50-150 LOC per 8-hour day. Apparently, this "hotshot" is not "naive", though, implying that it's most definitely legit.
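The arithmetic, for anyone who wants to check it (a quick sketch using only the figures quoted above):

    # Sanity check on the claimed rate vs. the quoted industry norm.
    loc, hours = 10_000, 12
    print(loc / (hours * 60))  # ~13.9 LOC per minute, sustained for 12 hours
    print(150 / (8 * 60))      # ~0.31 LOC per minute at the top of 50-150/day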
[0] https://www.sciencenews.org/article/ai-energy-carbon-emissio...
[1] https://x.com/paulg/status/1953289830982664236
> so what happens when the "teaching" mode rethinks history, or math fundamentals?
The person attempting to learn either (hopefully) figures out that the AI model was wrong, or sadly learns the wrong material. The level of impact is probably relative to how useful the knowledge is in one's life.
The good or bad news, depending on how you look at it, is that humans are already great at rewriting history and believing wrong facts, so I am not entirely sure an LLM can do that much worse.
Maybe ChatGPT might just kill off the ignorant, like it already has? GPT already told a user to combine bleach and vinegar, which produces chlorine gas. [1]
Reminds me of our president
>> People are starving to death ...
> The only solution to those people starving to death is to kill the people that benefit from them starving to death.
There are solutions other than "to kill the people that benefit", such as those that have existed for many years, including but not limited to:
- Efforts such as the recently emasculated USAID[0].
- Humanitarian NGOs[1] such as the World Central Kitchen[2] and the Red Cross[3].
- The will of those who could help to help those in need[4].
Note that none of the aforementioned require executions or engineering prowess.
0 - https://en.wikipedia.org/wiki/United_States_Agency_for_Inter...
1 - https://en.wikipedia.org/wiki/Non-governmental_organization
2 - https://wck.org/
3 - https://en.wikipedia.org/wiki/International_Red_Cross_and_Re...
> People are starving to death and the world's brightest engineers are ...
This is a political will, empathy, and leadership problem. Not an engineering problem.
>>> People are starving to death and the world's brightest engineers are ...
>> This is a political will, empathy, and leadership problem. Not an engineering problem.
> Those problems might be more tractable if all of our best and brightest were working on them.
The ability to produce enough food for those in need already exists, so that problem is theoretically solved. Granted, logistics engineering[0] is a real thing and would benefit from "our best and brightest."
What is lacking most recently, based on empirical observation, is a commitment to benefiting those in need without expectation of remuneration. Or, in other words, empathetic acts of kindness.
Which is a "people problem" (a.k.a. the trio I previously identified).
Famine in the modern world is almost entirely caused by dysfunctional governments and/or armed conflicts. Engineers have basically nothing to do with either of those.
This sort of "there are bad things in the world, therefore focusing on anything else is bad" thinking is generally misguided.
Famine is mostly political, but engineers (not all of them) definitely have something to do with it. If you’re building powerful AI for corporations that are then involved with the political entities that caused the famine, you can’t claim to have basically nothing to do with it.
They won’t be honest and explain it to you, but I will. Takes like the one you’re responding to come from loathsome, pessimistic anti-LLM people so detached from reality that they can confidently assert things with no bearing on truth or evidence. It’s a coping mechanism, and at this point it’s basically a widespread mental illness.
And what does that make you? A "loathsome, clueless pro-LLM zealot detached from reality"? LLMs are essentially next-word predictors marketed as oracles. People use them as oracles, and that's killing them, because LLMs don't actually "know", don't "know that they don't know", and won't tell you they're inadequate when they are. That problem is left completely unsolved, and it's at the core of very legitimate concerns about the proliferation of LLMs. If someone here sounds irrational and "coping", it very much appears to be you.
> working on problems that don't even really need to be solved
Very, very few problems _need_ to be solved. Feeding yourself is a problem that needs to be solved in order for you to continue living. People solve problems for different reasons. If you don't think LLMs are valuable, you can just say that.
The few problems humanity has that need to be solved:
1. How to identify humanity's needs on all levels, including cosmic ones... (we're in the Space Age, so we need to prepare ourselves for meeting beings from other places)
2. How to meet all of humanity's needs
Pointing this out regularly is probably necessary, because the issue isn't why people are choosing what they're doing; it's that our systems actively disincentivize collectively addressing these two problems in a way that doesn't sacrifice people's wellbeing and lives, and most people don't even think about it in these terms.
Well, we all thought advertising was the worst thing to come out of the tech industry, someone had to prove us wrong!
Can you explain what you mean by 'not needing to be solved'? There are versions of that kind of critique that would seem, at least on the surface, to apply better to finance or flash trading.
I ask because scaling a system that a substantial chunk of the population finds incredibly useful, including for the more efficient production of public goods (scientific research, for example), does seem like a problem that a) needs to be solved from a business point of view, and b) should be solved from a civic-minded point of view.