anon291 11 hours ago

> Even using terms like neurons and synapses. With those claims getting widespread, people also start building theories on top of them that make AI’s more like humans.

Except the networks studied here for prospective configuration are ... neural networks. No changes to the architecture have been proposed, only a new learning algorithm.

If anything, this article lends credence to the idea that ANNs do -- at some level -- simulate the same kind of thing that goes on in the brain. That is to say, the article posits that some set of weights would replicate the brain pretty closely; the issue is how to find those weights. Backprop is one of many known -- and used -- algorithms. It is liked because the mechanism is well understood (function minimization using calculus). Many other ways to train ANNs have been suggested (genetic algorithms, simulated annealing, etc.). This one suggests an energy-based approach, which is also not novel.
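For anyone who wants the distinction concretely, here's a toy numpy sketch of a backprop update next to an energy-based one. This is my own illustration of the general idea, not the paper's exact algorithm; the quadratic energy, the layer sizes, and all the constants are assumptions made up for the sketch.

    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(4, 3))         # input -> hidden weights
    W2 = rng.normal(size=(2, 4))         # hidden -> output weights
    x = rng.normal(size=3)               # toy input
    y = np.array([1.0, 0.0])             # toy target
    lr = 0.01

    # Backprop: compute the loss gradient by the chain rule, then
    # update every weight from that single global error signal.
    h = W1 @ x
    out = W2 @ h
    err = out - y                        # d(0.5*||out - y||^2)/d(out)
    dh = W2.T @ err                      # error propagated backward
    W2 -= lr * np.outer(err, h)
    W1 -= lr * np.outer(dh, x)

    # Energy-based (predictive-coding style): clamp the output to the
    # target, relax the hidden activity to lower an energy
    # E = 0.5*||h - W1 x||^2 + 0.5*||y - W2 h||^2, then make purely
    # local weight updates toward the settled activities.
    h = W1 @ x                           # initial guess for hidden activity
    for _ in range(50):                  # inference phase: settle h
        e1 = h - W1 @ x                  # prediction error at hidden layer
        e2 = y - W2 @ h                  # prediction error at clamped output
        h -= 0.1 * (e1 - W2.T @ e2)      # gradient of E w.r.t. h
    W1 += lr * np.outer(h - W1 @ x, x)   # learning phase: local updates
    W2 += lr * np.outer(y - W2 @ h, h)

Same architecture in both halves; only the learning rule differs, which is the point the article is making.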

nickpsecurity 4 hours ago

"Except the networks studied here for prospective configuration are ... neural networks. No changes to the architecture have been proposed, only a new learning algorithm."

In scientific investigations, it's best to look at one component, or feature, at a time. It's also common to put the feature in an existing architecture to assess the difference it makes in isolation. Many papers trying to imitate brain architecture study only one such feature: I've seen them try stateful neurons, spiking, sparsity, Hebbian learning, hippocampus-like memory, etc. Others study combinations of such things.

So, the field looks at brain-inspired changes to common ML, specific components that closely follow brain design (software or hardware), and whole architectures imitating brain principles with artificial deviations. And everything in between. :)

anon291 46 minutes ago

I'm not sure what you're trying to say here. Hebbian learning is the basis for current ANNs. Spiking neural nets are again an adaptation of neural nets. The entire field is inspired by nature and has been on a never-ending quest to replicate it.
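For reference, the Hebbian rule the field grew out of is just a local correlation update. A toy sketch, with made-up activity values:

    import numpy as np

    # Hebb's rule: strengthen a connection when pre- and postsynaptic
    # activity coincide ("neurons that fire together wire together").
    pre = np.array([0.2, 0.9, 0.1])    # presynaptic activities (toy values)
    post = np.array([0.8, 0.3])        # postsynaptic activities (toy values)
    W = np.zeros((2, 3))               # synaptic weights, post x pre
    W += 0.01 * np.outer(post, pre)    # delta_W = lr * post * pre^T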

This paper is an incremental step along that path, but commenters here are acting as if it's a polemic against neural nets.