Comment by eadmund 2 days ago

> the Web at large is full of slop generated by large language models, written by no one to communicate nothing

That’s neither fair nor accurate. That slop is ultimately generated by the humans who run those models; they are attempting (perhaps poorly) to communicate something.

> two companies that I already despise

Life’s too short to go through it hating others.

> it's very likely because they are creating a plagiarism machine that will claim your words as its own

That begs the question. Plagiarism has a particular definition. It is not at all clear that a machine learning from text should be treated any differently from a human being learning from text: i.e., duplicating exact phrases or failing to credit ideas may in some circumstances be plagiarism, but no-one is required to append a statement crediting every text he has ever read to every document he ever writes.

Credits: every document I have ever read. *grin*

miningape 2 days ago

This is just the "guns don't shoot people, people do" argument, except in this case we quite literally have a massive upside incentive to remove people from the process entirely (i.e. websites that automatically generate new content every day) - so I don't buy it.

This kind of AI slop is quite literally written by no one (an algorithm pushed it out), and it doesn't communicate anything, since communication first requires some level of understanding of the source material - and LLMs are just predicting the likely next token without understanding. I would also extend this to AI slop written by someone with limited domain understanding: they themselves have nothing new to offer, nor the expertise or experience to ensure the AI is producing valuable content.

I would go even further and say it's "read by no one" - people are sick and tired of reading the next AI slop article on Google and add terms like "reddit" to the end of their queries to limit the amount of garbage they get.

Sure, there are people using LLMs to enhance their research, but the vast, vast majority are using them to create slop that hits a word limit.

slashdave 2 days ago

> It is not at all clear that a machine learning from text should be treated any differently from a human being learning from text

Given that LLMs and human creativity work on fundamentally different principles, there is every reason to believe there is a difference.

weevil 2 days ago

I feel like you're giving certain entities too much credit there. Yes, text is generated to do _something_, but that something may not be good-faith communication: it could be keyword-dense gibberish designed to attract unsuspecting search engine users for click revenue, or political misinformation disseminated to a network of independent-looking "news" websites, or enough noise and nonsense pumped into certain spaces that they can no longer sustain any kind of meaningful human conversation.

The issue with generative 'AI' isn't that it generates text; it's that it can be (and is) used to generate high-volume, low-cost nonsense at a scale no human could ever achieve without it.

> Life’s too short to go through it hating others

Only when they don't deserve it. I have my doubts about Google, but I've no love for OpenAI.

> Plagiarism has a particular definition ... no-one is required to append a statement crediting every text he has ever read

Of course they aren't, because we rightly treat a human learning to communicate differently from computer code trained to predict the next word in a sentence and passed off as natural language with intent behind it. Musicians usually pay royalties to those whose songs they sample, but authors don't pay royalties to other authors whose work inspired them to construct their own stories, perhaps using similar concepts. There's a line there somewhere; falsely equating plagiarism with inspiration (or with natural language learning in humans) misses the point.