Aurornis 8 days ago

> Participants weren’t lazy. They were experienced professionals.

Assuming these professionals were great critical thinkers until the AI came along and changed that is a big stretch.

In my experience, the people who outsource their thinking to LLMs are the same people who outsourced their thinking to podcasts, news articles, Reddit posts, Twitter rants, TikTok videos, and other such sources. LLMs just came along and offered them opinions on demand that they could confidently repeat.

> The scary part is that many users still believed they were thinking critically, because GenAI made them feel smart

I don’t see much difference between this and someone who devours TikTok videos on a subject until they feel like an expert. Same pattern, different sources. The people who outsource their thinking and collect opinions they want to hear just have an easier way to skip straight to the conclusions they want now.

karaterobot 8 days ago

> In my experience, the people who outsource their thinking to LLMs are the same people who outsourced their thinking to podcasts, news articles, Reddit posts, Twitter rants, TikTok videos, and other such sources

He's talking specifically about OSINT analysts. Are you saying these people were outsourcing their thinking to podcasts, etc. before AI came along? I have not heard anyone make that claim before.

potato3732842 7 days ago

Having a surface level understanding of what you're looking at is a huge part of OSINT.

These people absolutely were reading Reddit comments from a year ago to help them parse unfamiliar jargon in some document they found or make sense of what's going on in an image or whatever.

jerf 7 days ago

At least on Reddit you've got a good chance of Cunningham's Law[1] making you realize it's not cut and dried. In this case I'm referring to what you might call a reduced-strength version of Cunningham's Law, which I would phrase as "The best way to get the right answer on the Internet is not to ask a question; it's to post what *someone somewhere thinks is* the wrong answer" (my added strength reduction in italics). If you stumble into a conversation where people are arguing, it's hard to avoid applying some critical thought to the situation to parse out who is correct.

The LLM-only AI just hands you a fully-formed opinion with always-plausible-sounding reasons. There's no cognitive prompt to make you consider whether it's wrong. I'm actually deliberately cultivating an instinctive distrust of LLM-only AI, and I would suggest it to other people: even if it's too critical on a percentage basis, you need it as a cognitive hack to remember to check everything coming out of them... not because they are never right, but precisely because they are often right, yet nowhere near 100% right.

If they were always wrong we wouldn't have this problem, and if they were reliably 99.9999% right we wouldn't have this problem either. But right now they sit in that maximum-danger zone of correctness: right often enough that we cognitively relax after a while, yet nowhere near right enough for that to be OK on any level.

[1]: https://en.wikipedia.org/wiki/Ward_Cunningham#Law

potato3732842 7 days ago

What you're describing for Reddit is farcically charitable, except in cases where you could just google it yourself. What you're describing for the LLM is what Reddit does whenever any judgement is involved.

I've encountered enough instances in subjects I'm familiar with where the "I'm 14 and I just googled it for you" answer that's right 51% of the time and dangerously wrong the other 49% is highly upvoted, while the "so I've been here before, and this is kind of nuanced with a lot of moving pieces; you'll need to understand the following X, and the general gist of Y is..." take that's more correct is heavily downvoted, that I feel justified in making the "safe" assumption that this is how all subjects work.

On one hand at least Reddit shows you the downvoted comment if you look and you can go independently verify what they have to say.

But on the other hand the LLM is instant and won't screech at you if you ask it to cite sources.

iszomer 7 days ago

That is why it is ideal to ask it double-sided questions, to test its biases as well as your own. Simply googling is not enough when most people don't think to customize their search anyway, compounded by the fact that indexed sources may have changed or been deprecated over time.

low_tech_love 8 days ago

The pull is too strong, especially when you factor in that (a) the competition is doing it and (b) the recipients of such output (reports, etc.) are not strict enough to care whether AI was used. In this situation, no matter how smart you are, not using the new tool of the trade would be basically career suicide.

raducu 5 days ago

> people who outsource their thinking to LLMs.

OSINT, I imagine, would be kind of useless to analyze with LLMs, because the kind of information you're interested in is very new, so there aren't enough sources for the LLMs to regurgitate.

As an example -- I read some defence articles about Romania operating 70 F-16s in the future, and it immediately caught my eye because I was expecting a number in the 40s. Apparently the Netherlands will leave those 18 F-16s to Romania -- but I'm not curious enough to dig into it -- I was expecting those would go to Ukraine.

So just for fun I asked the question -- to Gemini 2.5 and ChatGPT -- "How many F-16s will Romania eventually operate?" They both regurgitated the number in the 40s. I explicitly asked Gemini about the 18 F-16s from the Netherlands and it kept its estimate, saying those are for training purposes.

Only after I explicitly explained my own knowledge to it did Gemini google it and confirm the figure.

Or I asked about the tethered FPVs in Ukraine, and it told me those have very little impact. Only after I explicitly mentioned the recent successful Russian counter-offensive at Kursk did it acknowledge them.

torginus 8 days ago

And these people in positions of 'responsibility' always need someone or something to point to when shit goes sideways, so they might as well.

sirspacey 4 days ago

I'll be one to raise my hand and say this has been dramatically not the case for me or for anyone I've introduced AI to.

Everyone has come out significantly more informed and more reasoned.

jart 8 days ago

Yeah it's similar to how Facebook is blamed for social malaise. Or how alcohol was blamed before that.

It's always more comfortable for people to blame the thing rather than the person.

InitialLastName 8 days ago

More than one thing can be causing problems in a society, and enterprising humans of lesser scruples have a long history of preying on the weaknesses of others for profit.

jart 8 days ago

Enterprising humans have a long history of giving people what they desire, while refraining from judging what's best for them.

ZYbCRq22HbJ2y7 8 days ago

Ah yeah, fentanyl adulterators, what great benefactors of society.

Screaming "no one is evil, it's just markets!" probably helps people who base their lives on exploiting the weak sleep better at night.

https://en.wikipedia.org/wiki/Common_good

jart 8 days ago

No one desires adulterated fentanyl.

ZYbCRq22HbJ2y7 8 days ago

No one desires adulteration, but they do desire an opiate high, and they're willing to accept adulteration as a side effect.

You can look to the Prohibition era for historical analogies with alcohol; plenty of enterprising humans there.

harperlee 7 days ago

Fentanyl adulterators, market creators, and resellers certainly do, for higher-margin selling and/or increased volume.

potato3732842 7 days ago

The traffickers looking to pack more punch into each shipment that the government fails to intercept do.

Basically it's a response to regulatory reality, little different from soy-based wire insulation in automobiles. I'm sure they'd love to deliver pure opium and wiring that rodents don't like to eat, but that's just not possible while remaining in the black.

collingreen 7 days ago

This is a fine statement on its own, but a gross reply to the parent.

isaacremuant 7 days ago

Worse than enterprising humans are authoritarian humans who want to tell others how they should live, usually also exempting themselves from their rules.

They also prey on the weaknesses of humans and social appearances to do things for a "greater good".

There's a problem and we 'must do something', and if you're against doing the something I propose, you're evil and I'll label you as such.

The real mindfuck is that sometimes an unscrupulous entrepreneur only has to play your "fighting societal harm" game through politicians, and they get their way and we lose.

PeeMcGee 8 days ago

I like the Facebook comparison, but the difference is you don't have to use Facebook to make money and survive. When the thing is a giant noisemaker crapping out trash that screws up everyone else's work (and thus their livelihood), it becomes a lot more than just some nuisance you can brush away.

friendzis 8 days ago

If you are in the news business you basically have to.

itishappy 7 days ago

I think humans actually tend to prefer blaming individuals rather than addressing societal harms, but they're not in any way mutually exclusive.

jplusequalt 7 days ago

Marketing has a powerful effect. Look at how the decrease in smoking coincided with the decrease in smoking advertising (and now look at the uptick in vaping, driven by its marketing as a replacement for smoking).

Malaise exists at an individual level, but it doesn't transform into social malaise until someone comes in to exploit those people's addictions for profit.