thomasahle 2 days ago

> What the hell is general intelligence anyway?

OpenAI used to define it as "a highly autonomous system that outperforms humans at most economically valuable work."

Now they use a Level 1-5 scale: https://briansolis.com/2024/08/ainsights-openai-defines-five...

So we can say AGI is "AI that can do the work of Organizations":

> These “Organizations” can manage and execute all functions of a business, surpassing traditional human-based operations in terms of efficiency and productivity. This stage represents the pinnacle of AI development, where AI can autonomously run complex organizational structures.

__MatrixMan__ 7 hours ago

That's a silly definition, even if somebody with a lot of money wrote it. Organizations can do more than individuals for the same reason that an M4 can do more than a Pentium 4--it's a difference of degree.

Generality is about differences in kind. Like how my drill press can do things that an M4 can't. How could you ever know that the kinds of intelligence you've identified are all the kinds there are?

TheOtherHobbes 2 days ago

There's nothing general about AI-as-CEO.

That's the opposite of generality. It may well be the opposite of intelligence.

An intelligent system/individual reliably and efficiently produces competent, desirable, novel outcomes in some domain, avoiding failures that are incompetent, non-novel, and self-harming.

Traditional computing is very good at this for a tiny range of problems. You get efficient, very fast, accurate, repeatable automation for a certain small set of operation types. You don't get invention or novelty.

AGI will scale this reliably across all domains - business, law, politics, the arts, philosophy, economics, all kinds of engineering, human relationships. And others. With novelty.

LLMs are clearly a long way from this. They're unreliable, they're not good at novelty, and a lot of what they do isn't desirable.

They're barely in sight of human levels of achievement - not a high bar.

The current state of LLMs tells us more about how little we expect from human intelligence than about what AGI could be capable of.

Thrymr 2 days ago

Apparently OpenAI now just defines it monetarily as "when we can make $100 billion from it." [0]

[0] https://gizmodo.com/leaked-documents-show-openai-has-a-very-...

olyjohn 2 days ago

That's what "economically valuable work" means.