weatherlite 3 days ago

> If you've built your value on promising imminent AGI then this sort of thing is purely a distraction, and you wouldn't even be considering it... unless you knew you weren't about to shortly offer AGI.

Or even if you did come up with AGI, so would everyone else. Gemini is arguably better than ChatGPT now.

klabb3 3 days ago

Bingo. The secret sauce is never a sustainable long-term moat. Things leak, competitors copy or build something even better, employees switch jobs. AI looks less vulnerable since it's academically difficult and expertise is still scarce, but time and time again, new secret-sauce ingredients last months at most before they are matched or exceeded, often by hobbyists or other small actors.

I remember at Google they said that if source code leaked, nobody would actually be worried about the tech being stolen. The vast majority of the code is open to all employees, with a few exceptions like spam detection, ranking, etc. It's still protected, but it isn't considered a moat.

The moat comes from other things, such as datacenter and global networking infrastructure, and marketing new products by pushing them through existing ones (put Gemini in Search, bundle Chrome with Android, etc.). Most importantly, you can use data you already have to bootstrap new products, say Gmail and Calendar integrated with personalized assistants.

If you play your cards right, yes, there are some first-mover advantages, but they are more superficial than your average Twitter hype thread makes you think. Being first can let you set unofficial standards and APIs, like Kubernetes or S3 (maybe the OpenAI API?). You can also set certain agendas and build name recognition and trust. But all of that can slip through your fingers if a behemoth picks up where you left off. They have every advantage except being the fastest.

Topfi 3 days ago

In fairness, the AGI scenario predicted by the doomsday safety experts (who don't have time for such unimportant concerns as copyright or misinformation via this tech), by everyone who is most certainly not merely hyping to raise investor cash, and by the utterly serious and scientifically grounded research happening at MIRI, is essentially this: one company will achieve AGI, shortly thereafter that will lead to a singularity, and that's that. No one else could ever create a second one, because the first scary AGI is so powerful it would prevent anyone from shutting it down or from building a rival, other AGIs included. And no, this is totally not a sci-fi plot, but rigorous research. Incidentally, I'm looking for someone to help me prevent OAI from killing my own grandfather, because that scenario is also incredibly likely and should be taken seriously.

If it's not obvious already, I believe we are far away from that with LLMs, and that those working in model safety who focus on AGI over current-day concerns, like Meta just torrenting training data, are not very serious people. But I have accepted that this isn't a popular opinion amongst industry professionals.

Not least because, according to them, letting laws get in the way of model training is bad: either by the same weird logic Musk uses to justify testing FSD betas on an unwilling public (the potential to prevent future deaths), or because they genuinely took Roko's Basilisk seriously. No idea which is worse for serious adults…

ActorNightly 3 days ago

The funny thing is that nobody stops to ask whether that version of AGI is even possible. For example, it would have to run a simulation of reality with enough accuracy, faster than reality itself unfolds, and run multiple such simulations in parallel. For physical-world interaction this is doable, but for predicting human behavior with enough accuracy, it's probably not.

So even if we manage to come up with an algorithm that self-learns, self-expands, and so on, it likely won't be without major flaws.