Comment by jruohonen
"""
• Instead of forming hypotheses, users asked the AI for ideas.
• Instead of validating sources, they assumed the AI had already done so.
• Instead of assessing multiple perspectives, they integrated and edited the AI’s summary and moved on.
This isn’t hypothetical. This is happening now, in real-world workflows.
"""
Amen, and OSINT is hardly unique in this respect.
And implicitly related, philosophically:
>This isn’t hypothetical. This is happening now, in real-world workflows.
Yes, that's part of why AI has its bad rep. It has uses in streamlining workflows, but people are treating it like an oracle, when it very, very clearly is not.
Worse yet, people are just being lazy with it. It's the equivalent of googling a topic and pasting the lede of the Wikipedia article. Which is tasteless, but still likely to be more right than unfiltered LLM output.