"Except the networks studied here for prospective configuration are ... neural networks. No changes to the architecture have been proposed, only a new learning algorithm."
In scientific investigations, it's best to look at one component, or feature, at a time. It's also common to put the feature into an existing architecture to assess the difference it makes in isolation. Many papers that imitate brain architecture study only one such feature: I've seen stateful neurons, spiking, sparsity, Hebbian learning, hippocampus-like memory, etc. Others study combinations of such things.
So, the field looks at brain-inspired changes to common ML, specific components that closely follow brain design (software or hardware), and whole architectures imitating brain principles with artificial deviations. And everything in between. :)
I'm not sure what you're trying to say here. Hebbian learning inspired the earliest ANNs, and spiking neural nets are again an adaptation of neural nets. The entire field is inspired by nature and has been on a never-ending quest to replicate it.
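For anyone unfamiliar with the distinction being drawn: a Hebbian update uses only locally available pre- and post-synaptic activity, while a backprop-style update needs an error signal computed from a target. A toy NumPy comparison (the setup is illustrative, not from any of the papers discussed):

    import numpy as np

    rng = np.random.default_rng(0)

    x = rng.normal(size=3)        # presynaptic activity
    W = rng.normal(size=(2, 3))   # weights of a single linear layer, y = W x
    t = rng.normal(size=2)        # desired output
    lr = 0.1

    y = W @ x

    # Hebbian update: depends only on pre/post activity
    # ("cells that fire together wire together").
    dW_hebb = lr * np.outer(y, x)

    # Gradient update for squared error 0.5*||t - y||^2:
    # depends on an error signal relative to the target.
    dW_grad = lr * np.outer(t - y, x)

Same architecture in both cases; the learning rules just use different information.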
This paper is an incremental step along that path, but commenters here are acting as if it were a polemic against neural nets.