palmotea 8 days ago

One way to achieve superhuman intelligence in AI is to make humans dumber.

ryao 8 days ago

This reminds me of the guy who said he wanted computers to be as reliable as TVs. Then smart TVs were made and TV quality dropped to satisfy his goal.

SoftTalker 8 days ago

The TVs prior to the 1970s/solid state era were not very reliable. They needed repair often enough that "TV repairman" was a viable occupation. I remember having to turn on the TV a half hour before my dad got home from work so it would be "warmed up" so he could watch the evening news. We're still at that stage of AI.

ryao 8 days ago

The guy started saying it in the 80s or 90s, when that issue had been fixed. He is the Minix guy, if I recall correctly.

xrd 8 days ago

If you came up with that on your own, then I'm very impressed. That's very good. If you copied it, I'm still impressed and grateful you passed it on.

card_zero 8 days ago

Raises hand

https://news.ycombinator.com/item?id=43303755

I'm proud to see it evolving in the wild; this version is better. Or, you know, it could just be in the zeitgeist.

xrd 7 days ago

I'll never forget you, card_zero.

BrenBarn 8 days ago

What if ChatGPT came up with it?

palmotea 8 days ago

I don't use LLMs, because I don't want to let my biggest advantages atrophy.

MrMcCall 7 days ago

...while gleefully watching the bandwagon fools repeatedly ice-pick themselves in the brain.

6510 8 days ago

I thought: A group working together poorly isn't smarter than the smartest person in that group.

But it's worse: a group working together poorly isn't smarter than the fastest participant in the group.

trentlott 8 days ago

That's a fascinatingly obvious idea and I'd like to see data that supports it. I assume there must be some.

6510 6 days ago

I understand, but the bug is closed now.

jimmygrapes 8 days ago

Anybody who's ever tried to play bar trivia with a team should recognize this.

tengbretson 7 days ago

Being timid in bar trivia is the same as being wrong.

rightbyte 8 days ago

What do you mean? You can protest against bad but fast answers and check another box with the pen.

boringg 8 days ago

The cultural revolution approach to AI.

imoverclocked 8 days ago

That’s only if our stated goal is to make superhuman AI and we use AI at every level to help drive that goal. Point received.

yieldcrv 8 days ago

Right, superhuman would be relative to humans

but our whole notion of intelligence is rooted in the human ego's sense of being intellectually superior

caseyy 8 days ago

That’s an interesting point. If we created super-intelligence but it wasn’t anthropomorphic, we might just not consider it super-intelligent as a sort of ego defence mechanism.

Much good (and bad) sci-fi has been written about this. In it, this usually leads to some massive conflict that forces humans to accept machines as equals or superiors.

If we do develop super-intelligence or consciousness in machines, I wonder how that will all go in reality.

yieldcrv 8 days ago

Some things I think about are how different the goals could be.

For example, human and other biological goals center on self-preservation and propagation. That in turn requires appropriating resources, and systems for doing so become wealth accumulation. Species that don't do this don't continue existing.

A different branch of the evolution of intelligence may take a different approach, one that allows its effects to persist anyway.

caseyy 8 days ago

This reminds me of the "universal building blocks of life" or the "standard model of biochemistry" I learned at school in the 90s. It held that all life requires water, carbon-based molecules, sunlight, and CHNOPS (carbon, hydrogen, nitrogen, oxygen, phosphorus and sulfur).

Since then, it's become clear that much life in the deep sea is anaerobic, doesn't use phosphorus, and may thrive without sunlight.

Sometimes anthropocentrism blinds us. It's quite an interesting phenomenon.