Comment by wartywhoa23 2 days ago

142 replies

> Taking a moral stance against AI might make you feel good but doesn't serve the customer in the end. They need value for money. And you can get a lot of value from AI these days; especially if you are doing marketing, frontend design, etc. and all the other stuff a studio like this would be doing.

So let's all just give zero fucks about our moral values and just multiply monetary ones.

simianwords 2 days ago

>So let's all just give zero fucks about our moral values and just multiply monetary ones.

You are misconstruing the original point. They are simply suggesting that the moral qualms of using AI are not that high - neither to the vast majority of consumers, nor to the government. There are a few people who might exaggerate these moral issues for self-serving reasons, but they won't matter in the long term.

That is not to suggest there are absolutely no legitimate moral problems with AI but they will pale in comparison to what the market needs.

If AI can make things 1000x more efficient, humanity will collectively agree in one way or the other to ignore or work around the "moral hazards" for the greater good.

You can start by explaining what your specific moral value is that goes against AI use? It might bring to clarity whether these values are that important at all to begin with.

  • vkou 2 days ago

    > If AI can make things 1000x more efficient,

    Is that the promise of the faustian bargain we're signing?

    Once the ink is dry, should I expect to be living in a 900,000 sq ft apartment, or be spending $20/year on healthcare? Or be working only an hour a week?

    • ovi256 2 days ago

While humans have historically only mildly reduced their working time to today's 40h workweek, their consumption has gone up enormously, and whole new categories of consumption have opened up. So my prediction is that while you'll never live in a 900,000 sq ft apartment (unless we get O'Neill cylinders from our budding space industry), you'll probably consume a lot more, while still working a full week.

      • rightbyte 2 days ago

        40h is probably up from pre-industrial times.

        Edit: There is some research covering work time estimates for different ages.

      • johnnyanmac 2 days ago

        >you'll probably consume a lot more, while still working a full week

There's more to consume than 50 years ago, but I don't see that trend continuing. We shifted phone bills to cell phone bills and added internet bills and a myriad of subscriptions. But that's really it. Everything was "turn one-time purchases into subscriptions".

        I don't see what will fundamentally shift that current consumption for the next 20-30 years. Just more conversion of ownership to renting. In entertainment we're already seeing revolts against this as piracy surges. I don't know how we're going to "consume a lot more" in this case.

      • wizzwizz4 2 days ago

        I don't want to "consume a lot more". I want to work less, and for the work I do to be valuable, and to be able to spend my remaining time on other valuable things.

      • amrocha 2 days ago

        That sounds like a nightmare. Let’s sell out a generation so that we can consume more. Wow.

        • johnnyanmac 2 days ago

          Boomers in a nutshell. Do a bunch of stuff to keep from building more housing to prop up housing prices (which is much of their net worth), and then spend until you're forced to spend the last bit to keep yourselves alive.

          Then the hospital takes the house to pay off the rest of the debts. Everybody wins!

    • arthurfirst 2 days ago

      They signed it for you as there will be 1000x less workers needed so they didn't need to ask anymore.

    • neutronicus 2 days ago

      You will probably be dead.

      But _somebody_ will be living in a 900,000 sq ft apartment and working an hour a week, and the concept of money will be defunct.

  • johnnyanmac 2 days ago

    >They are simply suggesting that the moral qualms of using AI are simply not that high - neither to vast majority of consumers, neither to the government.

And I believe they (and I) are suggesting that this is just a bad-faith spin on the market, if you look at actual AI confidence and sentiment and don't dismiss it as "ehh, just the internet whining". Consumers having less money to spend doesn't mean they are adopting AI en masse, nor are they happy about it.

    I don't think using the 2025 US government for a moral compass is helping your case either.

    >If AI can make things 1000x more efficient

    Exhibit A. My observations suggest that consumers are beyond tired of talking about the "what ifs" while they struggle to afford rent or get a job in this economy, right now. All the current gains are for corporate billionaires, why would they think that suddenly changes here and now?

ako 2 days ago

AI is just a tool, like most other technologies, it can be used for good and bad.

Where are you going to draw the line? Only if it affects you? Or maybe we should go back to using coal for everything, so the mineworkers have their old life back? Or maybe follow the Amish guidelines and ban all technology that threatens the sense of community?

If you are going to draw a line, you'll probably have to start living in small communities, as AI as a technology is almost impossible to stop. There will be people and companies using it to its fullest; even if you have laws to ban it, other countries will allow it.

  • Throaway1985232 2 days ago

    The Amish don’t ban all tech that can threaten community. They will typically have a phone or computer in a public communications house. It’s being a slave to the tech that they oppose (such as carrying that tech with you all the time because you “need” it).

  • jimbokun 2 days ago

    You are thinking too small.

    The goal of AI is NOT to be a tool. It's to replace human labor completely.

    This means 100% of economic value goes to capital, instead of labor. Which means anyone that doesn't have sufficient capital to live off the returns just starves to death.

To avoid that outcome requires a complete rethinking of our economic system. And I don't think our institutions are remotely prepared for that, assuming the people running them care at all.

  • jpadkins 2 days ago

    I was told that Amish (elders) ban technology that separates you from God. Maybe we should consider that? (depending on your personal take on what God is)

  • johnnyanmac 2 days ago

    >Where are you going to draw the line?

    How about we start with "commercial LLMs cannot give Legal, Medical, or Financial advice" and go from there? LLMs for those businesses need to be handled by those who can be held accountable (be it the expert or the CEO of that expert).

I'd go so far as to try and prevent the obvious and say "LLMs cannot be used to advertise products". But baby steps.

    >AI as a technology is almost impossible to stop.

Not really a fan of defeatist talk. Tech isn't as powerful as billionaires want you to pretend it is. It can indeed be regulated; we just need to first use our civic channels instead of fighting amongst ourselves.

    Of course, if you are profiting off of AI, I get it. Gotta defend your paycheck.

    • joquarky a day ago

      So only the wealthy can afford legal, medical, and financial advice in your hypothetical?

      • wartywhoa23 a day ago

        What makes you think that in the world where only the wealthy can afford legal, medical, and financial advice from human beings, the same will be automatically affordable from AI?

        It will be, of course, but only until all human competition in those fields is eliminated. And after that, all those billions invested must be recouped back by making the prices skyrocket. Didn't we see that with e.g. Uber?

      • johnnyanmac a day ago

If you're going to approach this in such bad faith, then I'll simply say "yes" and move on. People can make bad decisions, but that shouldn't be a profitable business.

  • georgemcbay 2 days ago

    > AI is just a tool, like most other technologies, it can be used for good and bad.

    The same could be said of social media for which I think the aggregate bad has been far greater than the aggregate good (though there has certainly been some good sprinkled in there).

    I think the same is likely to be true of "AI" in terms of the negative impact it will have on the humanistic side of people and society over the next decade or so.

However, like social media before it, I don't know how useful it will be to try to avoid it. We'll all be drastically impacted by it through network effects whether we individually choose to participate or not, and practically speaking, those of us who still need to participate in society and commerce are going to have to deal with it, though that doesn't mean we have to be happy about it.

    • nradov 2 days ago

      Regardless of whether you use AI or social media, your happiness (or lack thereof) is largely under your own control.

      • johnnyanmac 2 days ago

        >your happiness (or lack thereof) is largely under your own control.

        Not really. Or at least, "just be happy" isn't a good response to someone homeless and jobless.

    • marcosdumay 2 days ago

      > The same could be said of social media

      Yes, absolutely.

Just because it's monopolized by evil people doesn't mean it's inherently bad. In fact, most people here have seen examples of it done in a good way.

      • satvikpendem 2 days ago

> In fact, most people here have seen examples of it done in a good way.

        Like this very website we're on, proving the parent's point in fact.

  • _heimdall 2 days ago

    If it is just a tool, it isn't AI. ML algorithms are tools that are ultimately as good or bad as the person using them and how they are used.

    AI wouldn't fall into that bucket, it wouldn't be driven entirely by the human at the wheel.

I'm not sold yet on whether LLMs are AI; my gut says no and I haven't been convinced yet. We can't lose the distinction between ML and AI though, it's extremely important when it comes to risk considerations.

    • _heimdall 2 days ago

      Silent down votes, any explanations or counter points?

      • satvikpendem 2 days ago

        Because no one defines AI the way you seem to do here. LLMs and machine learning are in the field of artificial intelligence, AI.

tjwebbnorfolk 2 days ago

What the parent is saying is that what works is what will matter in the end. Whatever works better than its alternatives will become the method that survives in competition.

You not liking something on purportedly "moral" grounds doesn't matter if it works better than something else.

  • malfist 2 days ago

    Oxycontin certainly worked, and the markets demanded more and more of it. Who are we to take a moral stand and limit everyone's access to opiates? We should just focus on making a profit since we're filling a "need"

    • satvikpendem 2 days ago

Using LLMs doesn't kill people. I'm sure there are some exceptions, like the OpenAI-related suicide that was in the news, but not to the degree of oxycontin.

      • johnnyanmac 2 days ago

        >Using LLMs doesn't kill people

Guess you missed the post where lawyers were submitting legal documents generated by LLMs. Or people taking medical advice and ending up with bromide poisoning. Or the lawsuits around LLMs softly encouraging suicide. Or the general AI psychosis being studied.

        It's way past "some exceptions" at this point.

      • tinfoilhatter 2 days ago

Not yet, maybe... Once we factor in the environmental damage that generative AI, and all the data centers being built to power it, will inevitably cause, I think it will become increasingly difficult to make the assertion you just did.

    • tjwebbnorfolk 2 days ago

      Your comment is valid as a criticism of an "unfettered free market", but further proves my point that things that work will win.

idiotsecant 2 days ago

That's how it works. You can be morally righteous all you want, but this isn't a movie. Morality is a luxury for the rich. Conspicuous consumption. The morally righteous poor people just generally end up righteously starving.

  • dripdry45 2 days ago

This seems rather black and white. Defining the morals probably makes sense, then evaluating whether they can be lived by, or whether we can compromise in the face of other priorities?

senordevnyc 2 days ago

[flagged]

  • PaulDavisThe1st 2 days ago

    The age old question: do people get what they want, or do they want what they (can) get?

    Put differently, is "the market" shaped by the desires of consumers, or by the machinations of producers?

  • easyThrowaway 2 days ago

    > when the market is telling you loud and clear they want X

Does it though? Articles like [1] or [2] seem to be at odds with this interpretation. If it were any different we wouldn't be talking about the "AI bubble" after all.

    [1]https://www.pcmag.com/news/microsoft-exec-asks-why-arent-mor...

    [2]https://fortune.com/2025/08/18/mit-report-95-percent-generat...

    • Glemkloksdjf 2 days ago

      He is right though:

      "Jeez there so many cynics! It cracks me up when I hear people call AI underwhelming,”

ChatGPT can listen to you in real time, understands multiple languages very well, and responds in a very natural way. This is breathtaking and wasn't on the horizon just a few years ago.

      AI Transcription of Videos is now a really cool and helpful feature in MS Teams.

Segment Anything literally leapfrogged progress on image segmentation.

      You can generate any image you want in high quality in just a few seconds.

There are already human beings who are shittier at their daily job than an LLM is.

    • simianwords 2 days ago

1) It was a failure of a specific implementation.

2) If you had read the paper, you wouldn't use it as an example here.

A good-faith discussion of what the market feels about LLMs would include Gemini and ChatGPT numbers, and the overall market cap of these companies. Not cherry-picked, misunderstood articles.

      • easyThrowaway 2 days ago

No, I picked those specifically. When Pets.com[1] went down in 2000, it was neither the idea nor the tech stack that brought the company down; it was the speculative business dynamics that caused its collapse. The fact that we've swapped the technology underneath doesn't mean we're not basically falling into ".com Bubble - Remastered HD Edition".

        I bet a few Pets.com exec were also wondering why people weren't impressed with their website.

        [1]https://en.wikipedia.org/wiki/Pets.com

        • simianwords 2 days ago

Do you actually want to get into the details of how frequently markets get things right vs. get things wrong? It would make the priors a bit more lucid so we can be on the same page.

    • classified 2 days ago

Exactly. Microsoft, for instance, got a noticeable backlash for cramming AI everywhere, and for their future plans in that direction.

  • nothrabannosir 2 days ago

    [flagged]

    • clickety_clack 2 days ago

      This is a YC forum. That guy is giving pretty honest feedback about a business decision in the context of what the market is looking for. The most unkind thing you can do to a founder is tell them they’re right when you see something they might be wrong about.

      • brazukadev 2 days ago

        Which founder is wrong? Not only the brainwashed here are entrepreneurs

    • simianwords 2 days ago

What you (and others in this thread) are also doing is a sort of maximalist dismissal of AI itself, as if it is the embodiment of evil and, to be on the right side of things, one must fight against it.

      This might sound a bit ridiculous but this is what I think a lot of people's real positions on AI are.

      • nothrabannosir 2 days ago

        That's definitely not what I am doing, nor implying, and while you're free to think it, please don't put words in my mouth.

      • techpression 2 days ago

        Yet to see anything good come from it, and I’m not talking about machine learning for specific use cases.

        And if we look at the players who are the winners in the AI race, do you see anyone particularly good participating?

    • senordevnyc 2 days ago

      Are you going to hire him?

      If not, for the purpose of paying his bills, your giving a shit is irrelevant. That’s what I mean.

      • nothrabannosir 2 days ago

        You mean, when evaluating suppliers, do I push for those who don't use AI?

        Yes.

I'm not going to be childish and dunk on you for having to update your priors now, but this is exactly the problem with speaking in aphorisms and glib dismissals. You don't know anyone here, you speak in an authoritative tone for others, and you redefine what "matters" and what is worthy of conversation as if this is up to you.

        > Don’t write a blog post whining about your morals,

        why on earth not?

        I wrote a blog post about a toilet brush. Can the man write a blog post about his struggle with morality and a changing market?

  • DonHopkins 2 days ago

    Some people maintain that JavaScript is evil too, and make a big deal out of telling everyone they avoid it on moral grounds as often as they can work it into the conversation, as if they were vegans who wanted everyone to know that and respect them for it.

    So is it rational for a web design company to take a moral stance that they won't use JavaScript?

    Is there a market for that, with enough clients who want their JavaScript-free work?

Are there really enough companies that morally hate JavaScript enough to hire them, at the expense of their website's usability and functionality, and of their own users who aren't as laser-focused on performatively not using JavaScript (and letting everyone know about it) as they are?