Comment by InsideOutSanta 5 days ago

22 replies

I'm scared that this type of thing is going to do to science journals what AI-generated bug reports are doing to bug bounties. We're truly living in a post-scarcity society now, except that the thing we have an abundance of is garbage, and it's drowning out everything of value.

willturman 5 days ago

In a corollary to Sturgeon's Law, I'd propose Altman's Law: "In the Age of AI, 99.999...% of everything is crap"

  • SimianSci 5 days ago

    Altman's Law: 99% of all content is slop

    I can get behind this. This assumes a tool will need to be made to help determine the 1% that isn't slop. At which point I assume we will have reinvented web search once more.

    Has anyone looked at reviving PageRank?
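For what it's worth, the core PageRank idea is small enough to sketch. Below is a minimal power-iteration version over a made-up three-page link graph; the graph and the 0.85 damping factor are illustrative assumptions, not anything from an actual search engine.

```python
# Minimal PageRank via power iteration on a tiny hypothetical link graph.
# The graph and damping factor are illustrative, not from the thread.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}
damping = 0.85
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):
    # Every page starts each round with the "random jump" baseline...
    new = {p: (1 - damping) / len(pages) for p in pages}
    # ...then receives an equal share of each in-linking page's rank.
    for page, outlinks in links.items():
        share = damping * rank[page] / len(outlinks)
        for target in outlinks:
            new[target] += share
    rank = new

# Ranks sum to ~1; "c" collects the most link weight in this graph.
```

Real deployments add handling for dangling pages and sparse-matrix math, but the fixed point this loop converges to is the same ranking idea.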

    • Imustaskforhelp 5 days ago

      I mean, Kagi is probably the PageRank revival we're talking about.

      I've heard from people here that Kagi can help remove slop from searches, so I guess so.

      Although I'm a DDG user and I love using DDG because it's free, I can see how for some people price is a non-issue and they might like Kagi more.

      So Kagi / DDG (DuckDuckGo), yeah.

      • ectospheno 5 days ago

        I’ve been a Kagi subscriber for a while now. Recently picked up ChatGPT Business and now am considering dropping Kagi since I am only using it for trivial searches. Every comparison I’ve done with deep searches by hand and with AI ended up with the same results in far less time using AI.

      • jll29 5 days ago

        Has anyone kept an eye on who uses which back-end?

        DDG used to be meta-search on top of Yahoo, which doesn't exist anymore. What do Gabriel and co-workers use now?

    • _kb 4 days ago

      For images surely this is the next pivot for hot dog / not hot dog.

techblueberry 5 days ago

There's this thing where all the thought leaders in software engineering ask "What will change about building a business when code is free?" and while there are some cool things, I've also thought it could have some pretty serious negative externalities. I think this question is going to become big everywhere - business, science, etc. - which is like: OK, you have all this stuff, but is it valuable? How much of it actually takes away value?

  • jimbokun 4 days ago

    I think about this more and more when I see people online talking about their "agents managing agents" producing...something...24/7/365.

    Very rarely is there anything about WHAT these agents are producing and why it's important and valuable.

    • 2sk21 3 days ago

      Indeed - there is a lot of fake "productivity" going on with these swarms of agents

  • SequoiaHope 5 days ago

    To be fair, the question “what will change” does not presume the changes will be positive. I think it’s the right question to ask, because change is coming whether we like it or not. While we do have agency, there are large forces at play which impact how certain things will play out.

  • wmeredith 4 days ago

    The value is in the same place: solving people's problems.

    Now that the code is cheaper (not free quite yet) skills further up the abstraction chain become more valuable.

    Programming and design skills are less valuable. However, you still have to know what to build: product and UX skills are more valuable. You still have to know how to build it: software architect skills are more valuable.

jcranmer 5 days ago

The first casualty of LLMs was the slush pile--the unsolicited submission pile for publishers. We've since seen bug bounty programs and open source repositories buckle under the load of AI-generated contributions. And all of these have the same underlying issue: the LLM makes it easy to produce things that don't immediately look like garbage, which makes the volume of submissions skyrocket while the time-to-reject also goes up, because each submission passes the first (but only the first) absolute-garbage filter.

  • storystarling 5 days ago

    I run a small print-on-demand platform and this is exactly what we're seeing. The submissions used to be easy to filter with basic heuristics or cheap classifiers, but now the grammar and structure are technically perfect. The problem is that running a stronger model to detect the semantic drift or hallucinations costs more than the potential margin on the book. We're pretty much back to manual review which destroys the unit economics.
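As a concrete example of the kind of basic heuristics that used to catch low-effort submissions (a hypothetical sketch, not the platform's actual filter; the thresholds are made-up assumptions):

```python
import re

def looks_low_effort(text: str) -> bool:
    """Flag drafts with obvious pre-LLM-era quality problems before human review.

    Hypothetical heuristic; thresholds are illustrative, not real platform rules.
    """
    words = text.split()
    if len(words) < 20:
        # Too short to be a serious draft
        return True
    unique_ratio = len({w.lower() for w in words}) / len(words)
    if unique_ratio < 0.3:
        # Heavy repetition ("buy now buy now buy now ...")
        return True
    if re.search(r"(.)\1{5,}", text):
        # Character runs like "!!!!!!!!" or "aaaaaaaa"
        return True
    return False
```

LLM output sails past every check like these, which is the parent's point: the surface-level signals that made filtering cheap are gone, and what's left to check is semantics.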

    • lupire 5 days ago

      Why would detecting AI be more expensive than creating it?

    • direwolf20 5 days ago

      If it's print-on-demand, why does it matter? Why shouldn't you accept someone's money to print slop for them?

      • wmeredith 4 days ago

        Some book houses print on demand for wide audiences. It's not just for the author.

jll29 5 days ago

Soon, poor people will talk to an LLM, rich people will get human medical care.

  • Spivak 5 days ago

    I mean I'm currently getting "expensive" medical care and the doctors are still all using AI scribes. I wouldn't assume there would be a gap in anything other than perception. I imagine doctors that cater to the fuck you rich will just put more effort into hiding it.

    No one, at all levels, wants to do notes.

    • golem14 5 days ago

      My experience has been that the transcriptions are way more detailed and correct when doctors use these scribes.

      You could argue that not writing down everything provides a greater signal-noise ratio. Fair enough, but if something seemingly inconsequential is not noted and something is missed, that could worsen medical care.

      I'm not sure how this affects malpractice claims - it's now easier to prove (with notes) that the doc "knew" about some detail that would otherwise not have been noted down.