Brain-inspired, neuromorphic architectures are usually very different from the artificial neural networks used in machine learning. They're different enough (and better enough, on several axes) that people who know both keep trying to reproduce brain-like architectures to capture those benefits.
One of my favorite features is how they use local, likely Hebbian, learning instead of global learning with backpropagation. (I won't rule out some global mechanism, though.) The local learning makes their training much more efficient. Even if a global mechanism exists (e.g., during sleep?), brain architectures could run through more training data faster and cheaper, with the expensive global step just tidying things up over short periods.
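To make the contrast concrete, here's a minimal Python sketch of a local Hebbian update. All names and numbers are mine, and I'm using Oja's variant of "fire together, wire together" so the weights don't blow up. The point is just that each weight updates from its own pre- and post-synaptic activity, with no loss function or backward pass anywhere:

```python
import numpy as np

rng = np.random.default_rng(0)

# One linear layer, y = W @ x. Shapes and learning rate are illustrative.
W = rng.normal(scale=0.1, size=(16, 32))

def hebbian_step(W, x, lr=0.01):
    """Local update: each weight w_ij changes based only on its own
    pre-synaptic input x_j and post-synaptic output y_i.
    Oja's rule adds the -y_i^2 * w_ij decay term for stability."""
    y = W @ x
    return W + lr * (np.outer(y, x) - (y ** 2)[:, None] * W)

for _ in range(1000):
    x = rng.normal(size=32)
    W = hebbian_step(W, x)  # no global error signal anywhere
```

Every update is independent across weights, which is part of why this style of rule parallelizes so naturally.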
They are also more analog, parallel, sparse, and flexible. They have feedback loops (IIRC), and multiple tiers of memory integrated with their internal representations, which helps mitigate hallucination. They also have many specialized components that coordinate automatically to do the work without being externally trained to. All in around 20 watts.
Brains are both different from and vastly superior to ANNs. Similarities do exist, though: both have cells, connections, and change those connections based on incoming data. Quite abstract, I know. Past that, I'm not sure what other similarities they share. Some non-brain-inspired ANNs have memory in some form, but I don't know whether it's as effective and integrated as the brain's yet.
Totally agree! The "fire together, wire together" approach to training weights is super easy to parallelize, and you can design custom silicon to make it ridiculously efficient. Back when I was a Computational Neuroscience (CN) researcher, I worked with a team in Manchester that was exploring exactly that—not sure if they ever nailed it...
Funny enough, I actually worked with Rafal Bogacz, the last-named author of the paper we’re discussing, during his Basal Ganglia (BG) phase. He’s an incredibly sharp guy and made a pretty compelling argument that the BG implement the multihypothesis sequential probability ratio test (MSPRT) to decide between competing action plans in an optimal way.
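For anyone who hasn't met the MSPRT: each competing hypothesis (here, an action plan) accumulates its own log-likelihood of the incoming evidence, and you commit as soon as one hypothesis' posterior crosses a threshold. Here's a toy Python version; the Gaussian setup, threshold, and names are my own illustration, not Rafal's model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Which of three Gaussian sources is generating the samples?
means = np.array([0.0, 0.5, 1.0])   # one mean per hypothesis / action plan
sigma = 1.0
true_mean = 1.0                     # ground truth (hypothesis 2)
threshold = 0.99                    # required posterior before committing

log_lik = np.zeros(len(means))      # accumulated evidence per channel
for t in range(1, 10_000):
    x = rng.normal(true_mean, sigma)
    # Each channel adds only its own log-likelihood of the new sample
    # (shared constants cancel in the softmax below)
    log_lik += -0.5 * ((x - means) / sigma) ** 2
    # Softmax of accumulated evidence = posterior under a uniform prior
    post = np.exp(log_lik - log_lik.max())
    post /= post.sum()
    if post.max() >= threshold:
        print(f"chose hypothesis {post.argmax()} after {t} samples")
        break
```

The appealing part of the argument was that this accumulate-and-normalize computation looks like something the basal ganglia circuitry could plausibly implement.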
Back then, there was another popular theory that the BG used an actor-critic learning model—also quite convincing.
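That theory is also easy to sketch. In the actor-critic story, the reward prediction error plays the role of the phasic dopamine signal, a critic learns state values, and an actor learns action preferences. A minimal tabular version, with all specifics mine (just a two-armed bandit):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two-armed bandit, single state; numbers are illustrative.
true_reward = np.array([0.2, 0.8])  # arm 1 pays better
prefs = np.zeros(2)                 # actor: action preferences
value = 0.0                         # critic: value estimate of the one state
alpha_actor, alpha_critic = 0.1, 0.1

def softmax(z):
    p = np.exp(z - z.max())
    return p / p.sum()

for _ in range(2000):
    probs = softmax(prefs)
    a = rng.choice(2, p=probs)
    r = rng.normal(true_reward[a], 0.1)
    delta = r - value                # prediction error: the "dopamine" signal
    value += alpha_critic * delta    # critic learns expected reward
    prefs[a] += alpha_actor * delta  # actor reinforces pleasantly surprising actions

print(softmax(prefs))  # should heavily favor arm 1
```

Part of what made both stories convincing is that each maps onto real anatomy and dopamine physiology in a plausible way.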
But here’s the rub: in CN, the trend is to take algorithms from computer science and statistics and map them onto biology. What’s far rarer is extracting new ML algorithms from the biology itself.
I got into CN because I thought the only way we’d ever crack AGI was by unlocking the secrets of the best example we’ve got—the mammalian brain. Unfortunately, I ended up frustrated with the biology-led approach. In ten years in the field, I didn’t see anything that really felt like progress toward AGI. CN just moves so much slower than mainstream ML!
Still, I hope Rafal’s onto something with this latest idea. Fingers crossed it gives ML researchers a shiny new algorithm to play with.