It’s been a very hard year
(bell.bz)
435 points by surprisetalk 2 days ago
It's very funny reading this thread and seeing the exact same arguments I saw five years ago for the NFT market and the metaverse.
All of this money is being funneled and burned away on AI shit that isn't even profitable and hasn't found a market niche outside of enabling 10x spammers, which is why companies are literally trying to force it everywhere they can.
It's also the exact same human beings who were doing the NFT and metaverse bullshit, insisting those were the next big thing, who had to jump ship to the next "totally going to change everything" grift because the first two reached the end of their runs.
I wonder what their plan was before LLMs seemed promising?
These techbros got rich off the dotcom-boom hype and lax regulation, and have spent the 20 years since attempting to force themselves onto the throne and own everything.
In the case of the author, their market isn't LLM makers directly; it's the people who use those LLMs. So the author's market is much bigger and isn't susceptible to collapse if LLM makers go bankrupt (those customers can just go back to what they were already doing pre-LLM). Quite the opposite, as this post shows.
> On this thread what people are calling “the market” is just 6 billionaire guys trying to hype their stuff so they can pass the hot potato to someone else right before the whole house of cards collapses.
Careful now, if they get their way, they’ll be both the market and the government.
Corrected title: "we have inflicted a very hard year on ourselves with malice aforethought".
The equivalent of that comic where the cyclist jams a stick into their own spokes and then acts surprised when they hit the dirt.
But since the author puts moral high-horse jockeying above money, they've gotten what they paid for: an opportunity to pretend they're a victim and morally righteous.
Par for the course
> we won’t work on product marketing for AI stuff, from a moral standpoint
Can someone explain this?
Some folks have moral concerns about AI. They include:
* The environmental cost of inference in aggregate, and of training in particular, is non-negligible.
* Training is performed (it is assumed) on material whose creators did not consent to it being trained upon. Some consider this to be akin to plagiarism or even theft.
* AI displaces labor, weakening workers across all industries, but especially junior folks. This consolidates power into the hands of the people selling AI.
* The primary companies who are selling AI products have, at times, controversial pasts or leaders.
* Many products are adding AI where it makes little sense, and those systems are performing poorly. Nevertheless, some companies shoehorn AI everywhere, cheapening products across a range of industries.
* The social impacts of AI, particularly generative media and AI-driven shopping in places like YouTube, Amazon, Twitter, Facebook, etc., are not well understood and could contribute to increased radicalization and Balkanization.
* AI is enabling an attention Gish gallop in places like search engines, where good results are being crowded out by slop.
Hopefully you can read these and understand why someone might have moral concerns, even if you do not. (These are not my opinions, but they are opinions other people hold strongly. Please don't downvote me for trying to provide a neutral answer to this person's question.)
I'm fairly sure the first three points are all true for each new human produced. The environmental cost versus output is probably significantly higher per human, and the population continues to grow.
My experience with large companies (especially American tech) is that they always try to deliver the product as cheaply as possible, are usually evil, and have never cared about social impacts. And HN has been steadily complaining about the declining quality of search results for at least a decade.
I think your points are probably a fair snapshot of people's moral issues, but I think they're also fairly weak when you view them in the context of how these types of companies have operated for decades. I suspect people are worried for their jobs and cling to a reasonable-sounding moral point so they don't have to admit that.
These points are so broad and multidimensional that one must really wonder whether they were looking for reasons for concern.
Let's put aside the fact that the person you replied to was trying to represent a diversity of views and not attribute them all to one individual, including the author of the article.
Should people not look for reasons to be concerned?
I have noticed this pattern too frequently: https://wiki.gentoo.org/wiki/Project:Council/AI_policy
See the diversity of views.
Man, I definitely feel this, being in the international trade business operating an export contract-manufacturing company from China with USA-based customers. I can't think of many shittier businesses to be in this year, lol. Actually, it's been pretty difficult for about 8 years now, given that trade war stuff actually started in 2017; then we had to survive covid, and now trade war two. It's a tough time for a lot of SMEs. AI has to be a handful for classic web/design shops to handle, on top of the SMEs that usually make up their customer base suffering through trade wars and tariff pains. Cash is just hard to come by this year. We've pivoted to focus more on design engineering services these past eight years, and that's been enough to keep the lights on, but it's hard to scale; it's just a bandwidth-constrained business that can only take a few projects at a time. Good luck to OP navigating it.
I'm just some random moron, but I clicked on TFA, and it looks like a very pretty ad.
What am I missing?
Previously: https://news.ycombinator.com/item?id=46070842
Well, glad this one wasn't flagged by the AI defenders. It was an interesting and frank look at the situation.
Wishing these guys all the best. It's not just about following the market; it's about the ability to just be yourself. When everyone around you is telling you that you just have to start doing something, and it's not even about the moral side of that thing, you simply just don't want to do it. Yeah, yeah, it's a cruel world. But this doesn't mean that we all need to victim blame everyone who doesn't feel comfortable in this trendy stream.
I hope things with AI settle down soon, applications that actually make sense emerge, and some sort of new balance gets established. Right now it's a nightmare. Everyone wants everything with the AI.
> Everyone wants everything with the AI.
All the _investors_ want everything with AI. Lots of people (non-tech workers even) just want a product that works, and often one that doesn't work differently than it did last year. That goal is often at odds with the AI-everywhere approach du jour.
>When everyone around you is telling you that you just have to start doing something, and it's not even about the moral side of that thing.
No, that's the most important time to consider the moral side. Years back, my slightly younger peers were telling everyone to eat Tide Pods. That's a pretty important time to say "no, that's a really stupid idea", even if you don't get internet clout.
I'd hope the tech community of all people would know what it's like to resist peer pressure. But alas.
>But this doesn't mean that we all need to victim blame everyone who doesn't feel comfortable in this trendy stream.
I don't see that at all in the article. Quite the opposite here, actually. I just see a person being transparent about their business and morals, and commenters here using it to try and say "yeah, but I like AI". Nothing here attacked y'all for liking it. The author simply has his own lines.
LLMs themselves are not a fad or overhyped. Even my mum (almost 70) is using an LLM nowadays, and she hates computers. Adapt or die, and I don't say this happily. I hate that I have to use an LLM to stay competitive; that's something I'm not used to in my life. I was always competitive bare-handed, mind you. Now I need to be armed with an LLM.
> we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that
Although there’s a ton of hype in “AI” right now (and most products are over-promising and under-delivering), this seems like a strange hill to die on.
imo LLMs are (currently) good at 3 things:
1. Education
2. Structuring unstructured data
3. Turning natural language into code
From this viewpoint, it seems there is a lot of opportunity both to help new clients and to create more compelling courses for your students.
No need to buy the hype, but no reason to die from it either.
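Point 2 is the easiest to make concrete. Here is a minimal sketch, assuming the OpenAI Python SDK with an API key in the environment; the model name, prompt, and output fields are illustrative assumptions, not recommendations:

    # Sketch: turning a free-form note into structured JSON with an LLM.
    import json
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    raw_note = "Call Dana Tues 3pm re: Q3 invoice, she prefers email follow-up"

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        response_format={"type": "json_object"},  # JSON mode keeps output parseable
        messages=[
            {"role": "system",
             "content": "Extract contact, datetime, topic, and preferred_channel "
                        "from the user's note as a JSON object."},
            {"role": "user", "content": raw_note},
        ],
    )

    structured = json.loads(response.choices[0].message.content)
    print(structured)  # e.g. {"contact": "Dana", "datetime": "Tuesday 3pm", ...}

The same pattern generalizes to invoices, support tickets, or scraped pages: define the fields you want, ask for JSON, and validate what comes back.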
Really depends what the moral objection is. If it's "no machine may speak my glorious tongue", then there's little to be said; if it's "AI is theft", then you can maybe make an argument about hypothetical models trained on public domain text using solar power and reinforced by willing volunteers; if it's "AI is a bubble and I don't want to defraud investors", then you can indeed argue the object-level facts.
Indeed, facts are part of the moral discussion in the ways you outlined. My objection was that just listing some facts/opinions about what AI can do right now is not enough for that discussion.
I wanted to make this point here explicitly because lately I've seen this complete erasure of the moral dimension from AI and tech, and to me that's a very scary development.
> ... we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that
I don't use AI tools in my own work (programming and system admin). I won't work for Meta, Palantir, Microsoft, and some others because I have to take a moral stand somewhere.
If a customer wants to use AI or sell AI (whatever that means), I will work with them. But I won't use AI to get the work done, not out of any moral qualm but because I think of AI-generated code as junk and a waste of my time.
At this point I can make more money fixing AI-generated vibe coded crap than I could coaxing Claude to write it. End-user programming creates more opportunity for senior programmers, but will deprive the industry of talented juniors. Short-term thinking will hurt businesses in a few years, but no one counting their stock options today cares about a talent shortage a decade away.
I looked at the sites linked from the article. Nice work. Even so, I think hand-crafted front-end work turned into a commodity some time ago, and now the onslaught of AI slop will kill it off. Those of us in the business of web sites and apps can appreciate mastery of HTML, CSS, and Javascript, beautiful designs, and user-oriented interfaces. Sadly, most business owners don't care that much and lack the perspective to tell good work from bad. Most users don't care either. My evidence: 90% of public web sites. No one thinks WordPress got the market share it has because of technical excellence or how it enables beautiful designs and UI. Before LLMs could crank out web sites, we had an army of amateur designers and business owners doing it with WordPress, paying $10/hr or less on Upwork and Fiverr.
Software people are such a "DIY" crowd that I think selling courses to us (or to our employers) is a crappy prospect. The hacker ethos is to build it yourself, so paying for courses seems like a poor fit.
I have a family member that produces training courses for salespeople; she's doing fantastic.
This reminds me of some similar startup advice of: don't sell to musicians. They don't have any money, and they're well-versed in scrappy research to fill their needs.
Finally, if you're against AI, you might have missed how good of a learning tool LLMs can be. The ability to ask _any_ question, rather than being stuck on video rails, is a huge time-saver.
>Software people are such a "DIY" crowd that I think selling courses to us (or to our employers) is a crappy prospect. The hacker ethos is to build it yourself, so paying for courses seems like a poor fit.
I think courses like these are peak "DIY". These aren't courses teaching you to RTFM; they're teaching you how to think deeper, find the edge cases, and develop a philosophy. That's knowledge worth its weight in gold. Unlike React tutorial #32456, this shows us how things really work "under the hood".
I'd happily pay for that. If I could.
>don't sell to musicians. They don't have any money
But programmers traditionally do have money?
>if you're against AI, you might have missed how good of a learning tool LLMs can be.
I don't think someone putting their business on the line over their stance needs yet another HN screed on why AI is actually good. Pretty sure they've thought deeply about this.
"Landing projects for Set Studio has been extremely difficult, especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that"
The market is literally telling them what it wants, and potential customers are asking them for work, but they are declining it from "a moral standpoint"
and instead blaming "a combination of limping economies, tariffs, even more political instability and a severe cost of living crisis".
This is a failure of leadership at the company. Adapt or die, your bank account doesn't care about your moral redlines.
I simply have a hard time understanding the refusal to work on anything AI-related. There is AI slop, but also a lot of interesting value-add products and features for existing products. I think it makes sense to be thoughtful about what to work on, but I struggle with the blanket no to AI.
My domain is games. It's a battlefield out there (pun somewhat intended). I ain't touching anything Gen-AI until we figure out what the hell is going on with regard to copyright, the morality around artists, and general "not look like shit"-ness.
Sad part is I probably will still be accused of using AI. But I'll still do my best.
I'm critical of AI because of climate change. Training and casual usage of AI take a lot of resources, and the electricity demand is way too high. We have made great progress in bringing a lot of renewable energy onto the grid, but AI eats up a huge part of it, so other sectors can't decarbonize as much.
We are still nowhere near getting climate change under control. AI is adding fuel to the fire.
I noticed a phenomenon on this post: many people are tying this person's business decisions to some sort of moral framework, or debating the morality of their plight.
"Moral" is mentioned 91 times at last count.
Where is that coming from? I understand AI is a large part of the discussion. But then where is /that/ coming from? And what do people mean by "moral"?
EDIT: Well, he mentions "moral" in the first paragraph. The rest is pity posting, so to answer my own question: morals are one of the few genuinely interesting things in the post. But in the last year I've noticed a lot more talk about "morals" on HN. "Our morals", "he's not moral", etc. Anyone else?
Interesting how someone can clearly be brilliant in one area and have their head totally buried in the sand in another, and not even realize it.
"especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that."
You will continue to lose business, if you ignore all the 'AI stuff'. AI is here to stay, and putting your head in the sand will only leave you further behind.
I've known people over the years that took stands on various things like JavaScript frameworks becoming popular (and they refused to use them) and the end result was less work and eventually being pushed out of the industry.
I agree that this year has been extremely difficult, but as far as I know, a large number of companies and individuals still made a fortune.
Two fundamental laws of nature: the strong prey on the weak, and survival of the fittest.
Therefore, why is it that those who survive are not the strong preying on the weak, but rather the "fittest"?
Next year's development of AI may be even more astonishing, continuing to kill off large companies and small teams unable to adapt to the market. Only by constantly adapting can we survive this fierce competition.
It’s ironic that Andy calls himself “ruthlessly pragmatic”, but his business is failing because of a principled stand in turning down a high volume of inbound requests. After reading a few of his views on AI, it seems pretty clear to me that his objections are not based in a pragmatic view that AI is ineffective (though he claims this), but rather an ideological view that they should not be used.
Ironically, while ChatGPT isn’t a great writer, I was even more annoyed by the tone of this article and the incredible overuse of italics for emphasis.
Yeah. For all the excesses of the current AI craze there's a lot of real meat to it that will obviously survive the hype cycle.
User education, for example, can be done in ways that don't even feel like gen AI and that can drastically improve activation, e.g. a recommendation to use feature X based on activity Y, tailored to the user's use case.
If you won't even lean into things like this you're just leaving yourself behind.
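A minimal sketch of that recommendation idea, again assuming the OpenAI Python SDK; the feature allowlist, model name, and activity strings are all hypothetical. Constraining the model to a fixed allowlist is what keeps the result from feeling like gen AI to the user:

    # Sketch: pick one feature tip from a fixed allowlist based on recent activity.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    ALLOWED_TIPS = ["keyboard_shortcuts", "saved_filters", "bulk_edit", "none"]

    def suggest_feature(recent_events: list[str]) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[
                {"role": "system",
                 "content": "Given a user's recent product activity, reply with "
                            "exactly one item from this list and nothing else: "
                            + ", ".join(ALLOWED_TIPS)},
                {"role": "user", "content": "\n".join(recent_events)},
            ],
        )
        choice = resp.choices[0].message.content.strip()
        return choice if choice in ALLOWED_TIPS else "none"  # fail closed

    print(suggest_feature(["opened filter panel 9 times", "edited 40 rows one by one"]))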
>there's a lot of real meat to it that will obviously survive the hype cycle.
Okay. When the hype cycle dies we can re-evaluate. Stances aren't set in stone.
>If you won't even lean into things like this
I'm sure Andy knows what kind of business his clients were in and used that to inform his acceptance/rejection of projects. The post mentions web marketing, so it doesn't seem like much edtech crossed paths here.
On this thread what people are calling “the market” is just 6 billionaire guys trying to hype their stuff so they can pass the hot potato to someone else right before the whole house of cards collapses.