One (bad) pet theory I have is that LLMs/AIs are going to uncover something very uncomfortable to us: the difference in intelligence between people is a lot bigger than we thought. In that someone with an IQ of 95 and an IQ of 105 [0] have very different views of the world and very different abilities to navigate that world. Like, some people are much dumber than we thought they were and some people are much smarter. Not sure what the downstream effects of such a theory might be, but I don't like the things I can think up.
Again, a (bad) pet theory.
[0] Yes, IQ is not a good measure of blah blah blah. I'm just using this as a handle to explain things, I don't mean it literally.
I think we're gonna find that there are different ways to quantify "humanness" other than IQ. Someone with an IQ of 95 might seem "more real" than an LLM with a computed IQ of 145.
EQ is a much better test of what makes us "human" than IQ. The only reason we don't give it credit is that it makes us even more uncomfortable than IQ does.
I mean, yeah. IQ is a bad measure (even if self-consistent). Training trumps all, like with every task. The more we do something, the better we're going to be at it.
The thing that is going to be interesting is now that we have essentially cheap, ethically clear, and realistic digital 'people', what are the experiments that we can do with them and what can we uncover? I'm a little flat-footed even as to the questions that we can ask them now. At the very least, we can use them to 'dry-run' surveys and experiments and have better data collection and stress-testing. Like, you can generate realistic data now and use it to run the stats while the real surveys are coming in.
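To make the 'dry-run' idea concrete, here's a minimal sketch of the cheap end of that approach: generate fake Likert-scale survey responses and run the same summary stats you'd later run on real data, so the analysis pipeline is tested before collection finishes. The `synthetic_responses` helper and the uniform-random answers are my own assumptions; a real dry-run would sample from an LLM playing a respondent persona instead.

```python
import random
import statistics

def synthetic_responses(n, scale=5, seed=0):
    """Generate n fake Likert-scale answers (1..scale).

    Stand-in for LLM-simulated respondents: the point is to have
    realistically shaped data to exercise the stats pipeline with,
    not realistic opinions. Seeded so dry-runs are reproducible.
    """
    rng = random.Random(seed)
    return [rng.randint(1, scale) for _ in range(n)]

# Dry-run the exact summary stats the real survey will use.
data = synthetic_responses(200)
print("n =", len(data))
print("mean =", round(statistics.mean(data), 2))
print("stdev =", round(statistics.stdev(data), 2))
```

Once the pipeline runs clean on the synthetic batch, swapping in the incoming real responses is just a change of data source.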
Even if your claim is true, how would LLMs/AI lead to uncovering this? I don’t see why they are related, except very tangentially.
I mean they said it was a bad theory.
More seriously, it seems to be essentially the idea that “surpassing human intelligence” is not the binary outcome many thought it would be, and that much of what passes for human intelligence interpersonally could be imitation of intelligence.
Yeah, the impetus comes from the Ashley Madison hacks.
Like, you had thousands of men paying real money to chat with (terrible) bots. To me, that was the passing of the Turing Test. But I know of nearly no person that could possibly fall for that scam. Even family members deep in dementia knew it was a joke. Yet Ashley Madison made a ton of cash.
That, to me, was puzzling. How could it happen that people that are that foolish would be able to hold a job or pay taxes? It made no sense.
So, the (bad) pet theory that I eventually came up with is that human intelligence is a lot wider than we think it is.
Maybe you've discovered that learning pays compound interest.
David Epstein talks about this in Range.
Essentially, we have 'kind' and 'unkind' learning environments.
To be successful in a Kind environment, you drill-and-kill. The feedback is near instant and the ranking is clear. These are things like golf, classical music, and chess.
To be successful in an Unkind environment, you learn as much as you can. The feedback is infrequent and the ranking is murky. These are things like tennis, jazz, and business.
I'd think that the compound interest only pays off in the Unkind environments, as you can make new connections from the new data coming in. In the Kind environment, new data doesn't make a difference as you're just trying to be perfect at the thing you're focusing on; if anything it's an impediment.