Comment by Aurornis a day ago

> Participants weren’t lazy. They were experienced professionals.

Assuming these professionals were great critical thinkers until the AI came along and changed that is a big stretch.

In my experience, the people who outsource their thinking to LLMs are the same people who outsourced their thinking to podcasts, news articles, Reddit posts, Twitter rants, TikTok videos, and other such sources. LLMs just came along and offered them opinions on demand that they could confidently repeat.

> The scary part is that many users still believed they were thinking critically, because GenAI made them feel smart

I don’t see much difference between this and someone who devours TikTok videos on a subject until they feel like an expert. Same pattern, different sources. The people who outsource their thinking and collect opinions they want to hear just have an easier way to skip straight to the conclusions they want now.

karaterobot a day ago

> In my experience, the people who outsource their thinking to LLMs are the same people who outsourced their thinking to podcasts, news articles, Reddit posts, Twitter rants, TikTok videos, and other such sources

He's talking specifically about OSINT analysts. Are you saying these people were outsourcing their thinking to podcasts, etc. before AI came along? I have not heard anyone make that claim before.

  • potato3732842 12 hours ago

    Having a surface level understanding of what you're looking at is a huge part of OSINT.

    These people absolutely were reading Reddit comments from a year ago to help them parse unfamiliar jargon in some document they found or make sense of what's going on in an image or whatever.

    • jerf 12 hours ago

      At least if you're on Reddit, you've got a good chance of Cunningham's Law[1] giving you a shot at realizing it's not cut and dried. In this case, I'm referring to what you might call a reduced-strength version of Cunningham's Law, which I would phrase as "The best way to get the right answer on the Internet is not to ask a question; it's to post *what someone somewhere thinks is* the wrong answer" (my strength reduction in italics). If you stumble into a conversation where people are arguing, it's hard to avoid applying some critical thought to the situation to parse out who is correct.

      The LLM-only AI just hands you a fully formed opinion with always-plausible-sounding reasons. There's no cognitive prompt to make you consider whether it's wrong. I'm actually deliberately cultivating an instinctive distrust of LLM-only AI, and I'd suggest it to other people: even if that distrust is too critical on a percentage basis, you need it as a cognitive hack to remember to check everything coming out of them... not because they are never right, but precisely because they are often right, yet nowhere near 100% right. If they were always wrong we wouldn't have this problem, and if they were reliably 99.9999% right we wouldn't have this problem. But right now they sit in that maximum danger zone of correctness: right often enough that we cognitively relax after a while, but nowhere near right enough for that to be OK on any level.

      [1]: https://en.wikipedia.org/wiki/Ward_Cunningham#Law

      • potato3732842 11 hours ago

        What you're describing for Reddit is farcically charitable except in cases where you could just google it yourself. What you're describing for the LLM is what Reddit does when any judgement is involved.

        I've encountered enough instances, in subjects I am familiar with, where the "I'm 14 and I just googled it for you" answer that's right 51% of the time and dangerously wrong the other 49% is highly upvoted, while the "so I've been here before and this is kind of nuanced, with a lot of moving pieces; you'll need to understand the following X, and the general gist of Y is..." take that's more correct is heavily downvoted. So I feel justified in making the "safe" assumption that this is how all subjects work.

        On one hand at least Reddit shows you the downvoted comment if you look and you can go independently verify what they have to say.

        But on the other hand the LLM is instant and won't screech at you if you ask it to cite sources.

        • iszomer 8 hours ago

          That is why it is ideal to ask it double-sided questions, to test its biases as well as your own. Simply googling it is not enough when most people don't think to customize their search anyway, compounded by the fact that indexed sources may have changed or been deprecated over time.

low_tech_love 19 hours ago

The pull is too strong, especially when you factor in that (a) the competition is doing it and (b) the recipients of such outputs (reports, etc.) are not strict enough to care whether AI was used. In that situation, no matter how smart you are, not using the new tool of the trade would be basically career suicide.

torginus 15 hours ago

And these people in positions of 'responsibility' always need someone or something to point to when shit goes sideways so they might as well.

jart a day ago

Yeah it's similar to how Facebook is blamed for social malaise. Or how alcohol was blamed before that.

It's always more comfortable for people to blame the thing rather than the person.

  • InitialLastName a day ago

    More than one thing can be causing problems in a society, and enterprising humans of lesser scruples have a long history of preying on the weaknesses of others for profit.

    • jart a day ago

      Enterprising humans have a long history of giving people what they desire, while refraining from judging what's best for them.

      • collingreen 9 hours ago

        This is a fine statement on its own, but a gross reply to the parent.

  • PeeMcGee 19 hours ago

    I like the facebook comparison, but the difference is you don't have to use facebook to make money and survive. When the thing is a giant noisemaker crapping out trash that screws up everyone else's work (and thus their livelihood), it becomes a lot more than just some nuisance you can brush away.

    • friendzis 17 hours ago

      If you are in the news business you basically have to.

  • jplusequalt 9 hours ago

    Marketing has a powerful effect. Look at how the decrease in smoking coincided with the decrease in smoking advertisement (and now look at the uptick in vaping due to the marketing as a replacement for smoking).

    Malaise exists at an individual level, but it doesn't transform into social malaise until someone comes in to exploit those people's addictions for profit.

  • itishappy 12 hours ago

    I think humans actually tend to prefer blaming individuals rather than addressing societal harms, but they're not in any way mutually exclusive.
