nickpsecurity 11 hours ago

Some are surprised that anyone would make this point, whether in the title or in the research itself.

It might be a response to the many, many claims in articles that neural networks work like the brain, even using terms like neurons and synapses. As those claims spread, people start building theories on top of them that make AIs out to be more like humans. Then we won't need humans, or they'll go extinct, or something.

Many of us who are tired of that are both countering it and just using different terms where possible. So, I'm calling the AIs models, saying model training instead of learning, and describing them as finding and acting on patterns in data. Even laypeople seem to understand these terms with less confusion about them being just like brains.

skissane 11 hours ago

> It might be a response to the many, many claims in articles that neural networks work like the brain. Even using terms like neurons and synapses.

Artificial neural networks originated as simplified models of how the brain actually works. So they really do "work like the brain" in the sense of taking inspiration from certain rudiments of its workings. The problem is "like" can mean anything from "almost the same as" to "in a vaguely resembling or reminiscent way". The claim that artificial neural networks "work like the brain" is false under the first reading of "like" but true under the second.

utopicwork 8 hours ago

No? They work the way people assumed the brain works. We still don't understand how the brain actually works. You're too early to even make that claim.

nickpsecurity 7 hours ago

Brain-inspired, neuromorphic architectures are usually very different from neural networks in machine learning. They’re so different (and better) that people who know both keep trying to reproduce brain-like architecture to gain its benefits.

One of my favorite features is how they use local, likely Hebbian, learning instead of global learning with backpropagation. (I won't rule out some global mechanism, though.) The local learning makes their training much more efficient. Even if a global mechanism exists (e.g., during sleep?), brain architectures could run through more training data faster and cheaper, with the expensive step just tidying things up over shorter periods of time.
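To make the locality point concrete, here is a minimal numpy sketch I put together (not from any particular paper) contrasting a Hebbian update, which only needs the activity on either side of a synapse, with a backprop-style update, which needs an error signal handed down from the layers above. The sigmoid units and the function names are just assumptions for illustration:

    import numpy as np

    def hebbian_update(W, pre, post, lr=0.01):
        # Local, Hebbian-style update: each weight changes using only the
        # activities of the two units it connects; no global error signal.
        return W + lr * np.outer(post, pre)

    def backprop_update(W, pre, post, error_from_above, lr=0.01):
        # Backprop-style update: needs an error term carried down from the
        # layers above, plus the derivative of the nonlinearity (sigmoid here).
        delta = error_from_above * post * (1.0 - post)
        return W + lr * np.outer(delta, pre), W.T @ delta  # pass error further down

    # Toy usage: one layer of 4 units with 3 inputs.
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(4, 3))
    pre = rng.random(3)
    post = 1.0 / (1.0 + np.exp(-W @ pre))        # sigmoid layer output
    W_local = hebbian_update(W, pre, post)        # uses only local information
    W_global, err_down = backprop_update(W, pre, post, rng.random(4))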

They are also more analog, parallel, sparse, and flexible. They have feedback loops (IIRC), plus multiple tiers of memory integrated with their internal representation, with hallucination mitigation. They also have many specialized components that automatically coordinate to do the work without being externally trained to. All in around 100 watts.

Brains are both different from and vastly superior to ANNs. Similarities do exist, though. They both have cells and connections, and they change those connections based on incoming data. Quite abstract. Past that, I'm not sure what other similarities they have. Some non-brain-inspired ANNs have memory in some form, but I don't know if it's as effective and integrated as the brain's yet.

jmchambers 1 hour ago

Totally agree! The "fire together, wire together" approach to training weights is super easy to parallelize, and you can design custom silicon to make it ridiculously efficient. Back when I was a Computational Neuroscience (CN) researcher, I worked with a team in Manchester that was exploring exactly that—not sure if they ever nailed it...

Funny enough, I actually worked with Rafal Bogacz, the last-named author of the paper we’re discussing, during his Basal Ganglia (BG) phase. He’s an incredibly sharp guy and made a pretty compelling argument that the BG implement the multihypothesis sequential probability ratio test (MSPRT) to decide between competing action plans in an optimal way.
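For anyone who hasn't run into MSPRT, the core idea fits in a few lines. This is my own toy illustration (Gaussian likelihoods and an assumed 0.95 posterior threshold), not the BG model from Rafal's work: keep accumulating evidence for each hypothesis and commit as soon as one of them dominates.

    import numpy as np

    def msprt(observations, means, sigma=1.0, threshold=0.95):
        # Accumulate log-likelihoods for each hypothesis (Gaussians with
        # different means here) and stop the moment one hypothesis's
        # posterior probability crosses the threshold.
        log_like = np.zeros(len(means))
        posterior = np.full(len(means), 1.0 / len(means))
        for t, x in enumerate(observations, start=1):
            log_like += -0.5 * ((x - means) / sigma) ** 2   # shared constants cancel
            posterior = np.exp(log_like - np.logaddexp.reduce(log_like))
            winner = int(np.argmax(posterior))
            if posterior[winner] >= threshold:
                return winner, t, posterior
        return None, len(observations), posterior           # ran out of evidence

    # Toy run: three candidate "action plans"; the data actually comes from plan 1.
    rng = np.random.default_rng(0)
    data = rng.normal(loc=0.5, scale=1.0, size=200)
    print(msprt(data, means=np.array([0.0, 0.5, 1.0])))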

Back then, there was another popular theory that the BG used an actor-critic learning model—also quite convincing.
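For contrast, the actor-critic account boils down to something like this minimal tabular sketch (again my own illustration, not a BG model): a critic learns state values, and its TD error, often likened to the dopamine signal in these theories, also reinforces whichever action the actor just took.

    import numpy as np

    def actor_critic_step(V, prefs, s, a, r, s_next,
                          alpha_v=0.1, alpha_p=0.1, gamma=0.99):
        # Critic: temporal-difference error for the transition just experienced.
        td_error = r + gamma * V[s_next] - V[s]
        V[s] += alpha_v * td_error            # critic learns state values
        prefs[s, a] += alpha_p * td_error     # actor reinforces the chosen action
        return td_error

    def sample_action(prefs, s, rng):
        # Softmax over the actor's preferences for state s.
        p = np.exp(prefs[s] - prefs[s].max())
        return rng.choice(len(p), p=p / p.sum())

    # Tiny usage: 3 states, 2 actions.
    rng = np.random.default_rng(0)
    V, prefs = np.zeros(3), np.zeros((3, 2))
    a = sample_action(prefs, 0, rng)
    actor_critic_step(V, prefs, s=0, a=a, r=1.0, s_next=1)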

But here’s the rub: in CN, the trend is to take algorithms from computer science and statistics and map them onto biology. What’s far rarer is extracting new ML algorithms from the biology itself.

I got into CN because I thought the only way we’d ever crack AGI was by unlocking the secrets of the best example we’ve got—the mammalian brain. Unfortunately, I ended up frustrated with the biology-led approach. In ten years in the field, I didn’t see anything that really felt like progress toward AGI. CN just moves so much slower than mainstream ML!

Still, I hope Rafal’s onto something with this latest idea. Fingers crossed it gives ML researchers a shiny new algorithm to play with.

anon291 11 hours ago

> Even using terms like neurons and synapses. With those claims getting widespread, people also start building theories on top of them that make AI’s more like humans.

Except the networks studied here for prospective configuration are ... neural networks. No changes to the architecture have been proposed, only a new learning algorithm.

If anything, this article lends credence to the idea that ANNs do -- at some level -- simulate the same kind of thing that goes on in the brain. That is to say, the article posits that some set of weights would replicate the brain pretty closely. The issue is how to find those weights. Backprop is one of many known -- and used -- algorithms. It is liked because the mechanism is well understood (function minimization using calculus). There have been many other ways suggested to train ANNs (genetic algorithms, annealing, etc.). This one suggests an energy-based approach, which is also not novel.
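For what it's worth, the general shape of an energy-based scheme is easy to sketch. This is a generic predictive-coding-style step I wrote for illustration (linear units, one hidden layer), not the paper's exact "prospective configuration" algorithm: first let the activities settle so as to lower a sum of local prediction errors, then update each weight from purely local quantities.

    import numpy as np

    def energy_based_step(W1, W2, x, y, n_relax=50, lr_a=0.1, lr_w=0.01):
        # Infer activities first: relax the hidden activity h to reduce an energy
        # made of local prediction errors (the output is clamped to the target y).
        h = W1 @ x
        for _ in range(n_relax):
            e_h = h - W1 @ x                    # prediction error at the hidden layer
            e_y = y - W2 @ h                    # prediction error at the output layer
            h = h + lr_a * (-e_h + W2.T @ e_y)  # gradient descent on the energy w.r.t. h
        # Then learn: each weight update is local (an error times a presynaptic activity).
        e_h, e_y = h - W1 @ x, y - W2 @ h
        W1 = W1 + lr_w * np.outer(e_h, x)
        W2 = W2 + lr_w * np.outer(e_y, h)
        return W1, W2

    # Toy usage with random data.
    rng = np.random.default_rng(0)
    x, y = rng.random(5), rng.random(2)
    W1 = 0.1 * rng.normal(size=(3, 5))
    W2 = 0.1 * rng.normal(size=(2, 3))
    W1, W2 = energy_based_step(W1, W2, x, y)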

nickpsecurity 4 hours ago

"Except the networks studied here for prospective configuration are ... neural networks. No changes to the architecture have been proposed, only a new learning algorithm."

In scientific investigations, it's best to look at one component, or feature, at a time. It's also common to put the feature in an existing architecture to assess the difference that feature makes in isolation. Many papers trying to imitate brain architecture only use one feature in the study. I've seen them try stateful neurons, spiking, sparsity, Hebbian learning, hippocampus-like memory, etc. Others will study combinations of such things.

So, the field looks at brain-inspired changes to common ML, specific components that closely follow brain design (software or hardware), and whole architectures imitating brain principles with artificial deviations. And everything in between. :)

anon291 47 minutes ago

I'm not sure what you're trying to say here. Hebbian learning is the basis for current ANNs. Spiking neural nets are again an adaptation of neural nets. The entire field is inspired by nature and has been on a never-ending quest to replicate it.

This paper is an incremental step along that path, but commenters here are acting as if it's a polemic against neural nets.