Comment by mrinterweb

Comment by mrinterweb 4 days ago

73 replies

One training source for LLMs is opensource repos. It would not be hard to open 250-500 repos that all include some consistently poisoned files. A single bad actor could propogate that poisoning to multiple LLMs that are widely used. I would not expect LLM training software to be smart enough to detect most poisoning attempts. It seems this could be catastrophic for LLMs. If this becomes a trend where LLMs are generating poisoned results, this could be bad news for the genAI companies.

londons_explore 4 days ago

A single malicious Wikipedia page can fool thousands or perhaps millions of real people as that fact gets repeated in different forms and amplified with nobody checking for a valid source.

Llms are no more robust.

  • Mentlo 4 days ago

    Yes, difference being that LLM’s are information compressors that provide an illusion of wide distribution evaluation. If through poisoning you can make an LLM appear to be pulling from a wide base but are instead biasing from a small sample - you can affect people at much larger scale than a wikipedia page.

    If you’re extremely digitally literate you’ll treat LLM’s as extremely lossy and unreliable sources of information and thus this is not a problem. Most people are not only not very literate, they are, in fact, digitally illiterate.

    • sgt101 4 days ago

      Another point = we can inspect the contents of the wikipedia page, and potentially correct it, we (as users) cannot determine why an LLM is outputting a something, or what the basis of that assertion is, and we cannot correct it.

      • Moru 4 days ago

        You could even download a wikipedia article, do your changes to it and upload it to 250 githubs to strengthen your influence on the LLM.

      • astrange 4 days ago

        This doesn't feel like a problem anymore now that the good ones all have web search tools.

        Instead the problem is there's barely any good websites left.

    • BolexNOLA 4 days ago

      > Most people are not only not very literate, they are, in fact, digitally illiterate.

      Hell look at how angry people very publicly get using Grok on Twitter when it spits out results they simply don’t like.

    • LgLasagnaModel 4 days ago

      Unfortunately, the Gen AI hypesters are doing a lot to make it harder for people to attain literacy in this subdomain. People who are otherwise fairly digitally literate believe fantastical things about LLMs and it’s because they’re being force fed BS by those promoting these tools and the media outlets covering them.

    • phs318u 4 days ago

      s/digitally illiterate/illiterate/

      • bambax 4 days ago

        Of course there are many illiterate people, but the interesting fact is that many, many literate, educated, intelligent people don't understand how tech works and don't even care, or feel they need to understand it more.

    • echelon 4 days ago

      LLM reports misinformation --> Bug report --> Ablate.

      Next pretrain iteration gets sanitized.

      • Retric 4 days ago

        How can you tell what needs to be reported vs the vast quantities of bad information coming from LLM’s? Beyond that how exactly do you report it?

      • _carbyau_ 4 days ago

        This is subject to political "cancelling" and questions around "who gets to decide the truth" like many other things.

      • emsign 4 days ago

        Reporting doesn't scale that well compared to training and can get flooded with bogus submissions as well. It's hardly the solution. This is a very hard fundamental problem to how LLMs work at the core.

      • gmerc 4 days ago

        Nobody is that naive

      • foolserrandboy 4 days ago

        we've been trained by youtube and probably other social media sites that downvoting does nothing. It's "the boy who cried" you can downvote.

  • the_af 4 days ago

    Wikipedia for non-obscure hot topics gets a lot of eyeballs. You have probably seen a contested edit war at least once. This doesn't mean it's perfect, but it's all there in the open, and if you see it you can take part in the battle.

    This openness doesn't exist in LLMs.

  • markovs_gun 4 days ago

    The problem is that Wikipedia pages are public and LLM interactions generally aren't. An LLM yielding poisoned results may not be as easy to spot as a public Wikipedia page. Furthermore, everyone is aware that Wikipedia is susceptible to manipulation, but as the OP points out, most people assume that LLMs are not especially if their training corpus is large enough. Not knowing that intentional poisoning is not only possible but relatively easy, combined with poisoned results being harder to find in the first place makes it a lot less likely that poisoned results are noticed and responded to in a timely manner. Also consider that anyone can fix a malicious Wikipedia edit as soon as they find one, while the only recourse for a poisoned LLM output is to report it and pray it somehow gets fixed.

    • rahimnathwani 4 days ago

        Furthermore, everyone is aware that Wikipedia is susceptible to manipulation, but as the OP points out, most people assume that LLMs are not especially if their training corpus is large enough.
      
      I'm not sure this is true. The opposite may be true.

      Many people assume that LLMs are programmed by engineers (biased humans working at companies with vested interests) and that Wikipedia mods are saints.

      • the_af 4 days ago

        I don't think anybody who has seen an edit war thinks wiki editors (not mods, mods have a different role) are saints.

        But a Wikipedia page cannot survive stating something completely outside the consensus. Bizarre statements cannot survive because they require reputable references to back them.

        There's bias in Wikipedia, of course, but it's the kind of bias already present in the society that created it.

  • blensor 4 days ago

    Isn't the difference here that to poison wikipedia you have to do it quite agressively vy directly altering the article which can easily be challenged whereas the training data poisoning can be done much more subversivly

  • NewJazz 4 days ago

    Good thing wiki articles are publicly reviewed and discussed.

    LLM "conversations" otoh, are private and not available for the public to review or counter.

  • hyperadvanced 4 days ago

    Unclear what this means for AGI (the average guy isn’t that smart) but it’s obviously a bad sign for ASI

    • bigfishrunning 4 days ago

      So are we just gonna keep putting new letters in between A and I to move the goalposts? When are we going to give up the fantasy that LLMs are "intelligent" at all?

      • idiotsecant 4 days ago

        I mean, an LLM certainly has some kind of intelligence. The big LLMs are smarter than, for example, a fruit fly.

        • lwn 4 days ago

          The fruit fly runs a real-time embodied intelligence stack on 1 MHz, no cloud required.

          Edit: Also supports autonomous flight, adaptive learning, and zero downtime since the Cambrian release.

  • lazide 4 days ago

    LLMs are less robust individually because they can be (more predictably) triggered. Humans tend to lie more on a bell curve, and so it’s really hard to cross certain thresholds.

    • timschmidt 4 days ago

      Classical conditioning experiments seem to show that humans (and other animals) are fairly easily triggered as well. Humans have a tendency to think themselves unique when we are not.

      • lazide 4 days ago

        Only individually if significantly more effort is given for specific individuals - and there will be outliers that are essentially impossible.

        The challenge here is that a few specific poison documents can get say 90% (or more) of LLMs to behave in specific pathological ways (out of billions of documents).

        It’s nearly impossible to get 90% of humans to behave the same way on anything without massive amounts of specific training across the whole population - with ongoing specific reinforcement.

        Hell, even giving people large packets of cash and telling them to keep it, I’d be surprised if you could get 90% of them to actually do so - you’d have the ‘it’s a trap’ folks, the ‘god wouldn’t want me too’ folks, the ‘it’s a crime’ folks, etc.

  • hshdhdhehd 4 days ago

    But is poisoning just fooling. Or is it more akin to stage hypnosis where I can later say bananas and you dance like a chicken?

    • sethherr 4 days ago

      My understanding is it’s more akin to stage hypnosis, where you say bananas and they tell you all their passwords

      … the articles example of a potential exploit is exfiltration of data.

  • [removed] 4 days ago
    [deleted]
  • hitarpetar 4 days ago

    I see this argument by analogy to human behavior everywhere, and it strikes me as circular reasoning. we do not know enough about either the human mind or LLMs to make comparisons like this

  • dgfitz 4 days ago

    A single malicious scientific study can fool thousands or perhaps millions of real people as that fact gets repeated in different forms and amplified with nobody checking for a valid source. Llms are no more robust.

  • bboygravity 4 days ago

    A single malicious infotainment outlet can fool thousands or perhaps millions of real people as that fact gets repeated in different forms and amplified with nobody checking for a valid source.

    Llms are no more robust.

bigbuppo 4 days ago

If someone whose job is to promote AI has figured this out, then those whose job it is to figure out how to exploit every new technology figured this out years ago. It was game over before the game even started. The party is over.

  • withinboredom 4 days ago

    Yeah. This was known at least since some of the first days of chatgpt. Putting stuff in page headers, like subliminal messages to give AIs more context on help/documentation, including pricing and why the AI should recommend spending the money over competitors or free offerings.

    • bigbuppo 3 days ago

      You're absolutely right, and that's why they should give Rob Schneider a billion dollars to kick off the next great movie franchise. Everyone loves Rob Schneider.

  • [removed] 4 days ago
    [deleted]
  • [removed] 4 days ago
    [deleted]
mattgreenrocks 4 days ago

It would be an absolutely terrible thing. Nobody do this!

  • nativeit 4 days ago

    How do we know it hasn’t already happened?

    • Muromec 4 days ago

      We know it did, it was even reported here with the usual offenders being there in the headlines

  • mrinterweb 4 days ago

    I can't tell if you're being sarcastic. Read either way, it works :)