Comment by evantbyrne 2 days ago

35 replies

Being broadly against AI is a strange stance. Should we all turn off swipe to type on our phones? Are we supposed to boycott cancer testing? Are we to forbid people with disabilities reading voicemail transcriptions or using text to speech? Make it make sense.

marcelr 2 days ago

> Make it make sense.

Ok. They are not talking about AI broadly, but LLMs, which have insane energy requirements and benefit from the unpaid labor of others.

  • doug_durham 2 days ago

    These arguments are becoming tropes with little influence. Find better arguments.

    • jimbokun 2 days ago

      Does the truth of the arguments have no bearing?

      • Aperocky 2 days ago

        An argument can both be true and irrelevant.

        • johnnyanmac a day ago

          Okay, you saying it's irrelevant doesn't make it so. You don't control how people feel about stuff.

    • marcelr 2 days ago

      haha this sounds like a slave master saying “again, free the slaves? really? i’ve heard that 100s of times, be more original”

    • franktankbank a day ago

Arguably you shouldn't trivialize your argument by decorating it when fundamentally it is rock solid. I wonder if the author would consider just walking away from tech when they realize what a useless burden it's become for everyone.

ausbah 2 days ago

i think when ppl say AI they mean “LLMs in every consumer-facing product”

  • evantbyrne 2 days ago

    You might be right, and I think tech professionals should be expected to use industry terminology correctly.

    • 63stack 2 days ago

      There is not a single person in this thread that thinks of swiping on phones when the term "AI" is mentioned, apart from people playing the contrarian.

      • dkdcio 2 days ago

        counter example: me! autocorrect, spam filters, search engines, blurred backgrounds, medical image processing, even revenue forecasting with logistic regression are “AI” to me and others in the industry

        I started my career in AI, and it certainly didn’t mean LLMs then. some people were doing AI decades ago

        I would like to understand where this moral line gets drawn — neural networks that output text? that specifically use the transformer architecture? over some size?

        • satvikpendem a day ago

          When Stable Diffusion and GitHub Copilot came out a few years ago is when I really started seeing this "immoral" mentality about AI, and like you it really left me scratching my head, why now and not before? Turns out, people call it immoral when they see it threatens their livelihood, and they come up with all sorts of justifications that sound plausible, but when you dig underneath, it's all about their economic anxiety, nothing more. Humans are not direct creatures; it's much more emotional than one would expect.

      • fragmede 2 days ago

        You take a pile of input data, use a bunch of code on it to create a model, which is generally a black box, and then run queries against that black box. No human really wrote the model. ML has been in use for decades, in various places. Google Translate was an "early" convert. Credit card fraud models as well.

        The industry joke is: What do you call AI that works? Machine Learning.
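
        The pipeline described above — a pile of input data in, an opaque model out, queries against the model — can be sketched in a few lines. This is a toy nearest-centroid "spam filter" written for illustration, not any particular product or library:

```python
def train(examples):
    """Build a model from labeled data: label -> centroid of its feature vectors."""
    sums, counts = {}, {}
    for features, label in examples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
    # The returned dict of centroids is the "black box" we query later;
    # no human wrote its contents, the data did.
    return {label: [s / counts[label] for s in acc] for label, acc in sums.items()}

def query(model, features):
    """Classify a new example by nearest centroid (squared Euclidean distance)."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(model, key=lambda label: dist(model[label]))

# "Pile of input data": (feature vector, label) pairs, e.g. [link count, body length].
data = [
    ([9.0, 0.5], "spam"), ([8.0, 0.7], "spam"),
    ([1.0, 3.0], "ham"),  ([0.5, 4.0], "ham"),
]
model = train(data)
print(query(model, [7.5, 1.0]))  # near the spam centroid -> "spam"
print(query(model, [0.8, 3.5]))  # near the ham centroid -> "ham"
```

        Swap the centroid math for gradient descent over millions of parameters and the same shape describes the credit-card fraud and Google Translate examples: the code is small and legible, the trained artifact is not.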

insane_dreamer 2 days ago

What do LLMs have to do with typing on phones, cancer research, or TTS?

Deciding not to enable a technology that is proving to be destructive except for the very few who benefit from it, is a fine stance to take.

I won't shop at Walmart for similar reasons. Will I save money shopping at Walmart? Yes. Will my not shopping at Walmart bring about Walmart's downfall? No. But I refuse to personally be an enabler.

  • Legend2440 2 days ago

    I don't agree that Walmart is a similar example. They benefit a great many people - their customers - through their large selection and low prices. Their profit margins are considerably lower than the small businesses they displaced, thanks to economies of scale.

    I wish I had Walmart in my area, the grocery stores here suck.

    • insane_dreamer 2 days ago

      It is a similar example. Just like you and I have different opinions about whether Walmart is a net benefit or net detriment to society, people have starkly different opinions as to whether LLMs are a net benefit or net detriment to society.

      People who believe it's a net detriment don't want to be a part of enabling that, even at cost to themselves, while those who think it's a net benefit or at least neutral, don't have a problem with it.

    • johnnyanmac a day ago

      You really need to research "the Wal-Mart effect" before spouting that again. There's literally a named phenomenon for what happens after they arrive.

      If your goal is to not contribute to community and leave when it dries up, sure. Walmart is great short term relief.

johnnyanmac a day ago

They are a marketing firm, so the stance within their craft is much narrower in scope than cancer testing.

Also, we clearly aren't prioritizing cancer research if Altman has shifted to producing slop videos. That's why sentiment is decreasing.

>Make it make sense.

I can't explain to one who doesn't want to understand.

blamestross 2 days ago

Intentionally or not, you are presenting a false equivalency.

I trust in your ability to actually differentiate between the machine learning tools that are generally useful and the current crop of unethically sourced "AI" tools being pushed on us.

  • rosslh 2 days ago

    One person's unethical AI product is another's accessibility tool. Where the line is drawn isn't as obvious as you're implying.

    • th0ma5 2 days ago

      It is unethical to me to provide an accessibility tool that lies.

      • Legend2440 2 days ago

        LLMs do not lie. That implies agency and intentionality that they do not have.

        LLMs are approximately right. That means they're sometimes wrong, which sucks. But they can do things for which no 100% accurate tool exists, and maybe could not possibly exist. So take it or leave it.

        • johnnyanmac a day ago

          >That implies agency and intentionality that they do not have.

          No, but the companies have agency. LLMs lie, and they only get fixed when companies are sued. Close enough.

    • blamestross 2 days ago

      If it were actually being given away as an accessibility tool, then I would agree with you.

      It kind of is that clear. It's IP laundering and oligarchic leveraging of communal resources.

      • satvikpendem a day ago

        1. Intellectual property is a fiction that should not exist.

        2. Open source models exist.

  • evantbyrne 2 days ago

    How am I supposed to know what specific niche of AI the author is talking about when they don't elaborate? For all I know they woke up one day in 2023 and that was the first time they realized machine learning existed. Consider my comment a reminder that ethical use of AI has been around for quite some time, will continue to be, and that much of it will be with LLMs.

    • johnnyanmac a day ago

      >Consider my comment a reminder that ethical use of AI has been around of quite some

      You can be in a swamp and say "but my corner is clean." This is the rotten-barrel metaphor in reverse: you're trying to claim your sole apple is somehow not rotten compared to the fermenting barrel it came from.

    • blamestross 2 days ago

      You have reasonably available context here. "This year" seems more than enough on its own.

      I think there are ethical use cases for LLMs. I have no problem leveraging a "common" corpus to support the commons. If they weren't over-hyped and almost entirely used as extensions of the wealth-concentration machine, they could be really cool. Locally hosted LLMs are kinda awesome. As it is, they are basically just theft from the public and IP laundering.

  • satvikpendem a day ago

    Putting aside the "useful" comment, because many find LLMs useful: let me guess, you're the one deciding whether it's ethical or not?

mmcromp 2 days ago

There's a moral line that every person has to draw about what work they're willing to do. Things aren't always so black and white; we straddle that line. The impression I got reading the article is that they didn't want to work for bubble AI companies trying to generate for the sake of generating. Not that they hated anything with a vector DB.