Topfi 3 days ago

In fairness, the AGI definition predicted by the doomsday safety experts, the ones with no time for unimportant concerns like copyright or misinformation via this tech, who are most certainly not just hyping to attract investor cash, and who do utterly serious and scientifically grounded research at MIRI, is essentially that one company achieves AGI, a singularity follows shortly after, and that's that. No one else could ever create a second, because that scary AGI is so powerful it would prevent anyone from shutting it down, including other AGIs. And no, this is totally not a sci-fi plot, but rigorous research. Incidentally, I'm looking for someone to help me prevent OAI from killing my own grandfather, because that scenario is also incredibly likely and should be taken seriously.

If it’s not obvious already, I believe we are far away from that with LLMs, and that those working in model safety who give their attention to AGI over current-day concerns, like Meta just torrenting for model data, are not very serious people. But I have accepted that this isn’t a popular opinion amongst industry professionals.

Not least because, according to them, letting laws get in the way of model training is bad, either by the same weird logic Musk uses to justify testing FSD betas on an unwilling public (the potential to prevent future deaths), or because they genuinely took RB seriously. No idea which is worse for serious adults…

ActorNightly 3 days ago

The funny thing is that nobody stops to ask if that version of AGI is even possible. For example, it would have to simulate reality with sufficient accuracy faster than reality actually unfolds, and run multiple such simulations in parallel. For physical-world interaction that is doable, but when it comes to predicting human behavior with enough accuracy, it probably isn't.

So even if we manage to come up with an algorithm that self-learns, self-expands, and so on, it likely won't be without major flaws.