
Stephen Grossberg • 3 years ago

The mathematical foundations of Deep Learning are contradicted by hundreds of neurobiological experiments. In addition, the algorithm has explained essentially no psychological or neurobiological data. Biological neural models have been developed over the past 40 years that have explained large numbers of psychological and neurobiological experiments, and that have made many successful experimental predictions. These models have also been used in many large-scale applications in engineering and technology, especially those that require autonomous adaptive intelligence. See https://www.frontiersin.org... for more about this topic.

Torbjörn Larsson • 3 years ago

Interesting. FWIW, I noted the order problem in the criticism and found it in the paper:

It has been noted that results of Fuzzy ART and ART 1 (i.e., the learnt categories) depend critically upon the order in which the training data are processed. The effect can be reduced to some extent by using a slower learning rate, but is present regardless of the size of the input data set. Hence Fuzzy ART and ART 1 estimates do not possess the statistical property of consistency.[15] This problem can be considered as a side effect of the respective mechanisms ensuring stable learning in both networks.

[ https://en.wikipedia.org/wi... ]
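
To make the order effect concrete, here is a minimal ART 1-style sketch (fast learning, fixed vigilance; a simplification of the full algorithm, so the details are only indicative). The same four binary inputs end up in four categories in one presentation order and only two in the reverse order:

```python
import numpy as np

def art1_fast_learn(inputs, vigilance=0.7, alpha=0.001):
    """Simplified ART 1-style clustering with fast (one-shot) learning."""
    categories = []                                   # learnt binary prototypes
    for I in inputs:
        I = np.asarray(I, dtype=float)
        # Search categories in order of the choice function |I ^ w| / (alpha + |w|).
        order = sorted(range(len(categories)),
                       key=lambda j: -np.minimum(I, categories[j]).sum()
                                      / (alpha + categories[j].sum()))
        for j in order:
            w = categories[j]
            match = np.minimum(I, w).sum() / I.sum()  # vigilance test
            if match >= vigilance:
                categories[j] = np.minimum(I, w)      # fast learning: w <- I ^ w
                break
        else:
            categories.append(I.copy())               # no resonance: commit new category
    return categories

data = [[1, 1, 0, 0, 0], [1, 1, 1, 0, 0], [0, 0, 0, 1, 1], [0, 0, 1, 1, 1]]
forward = art1_fast_learn(data)
reverse = art1_fast_learn(data[::-1])
print(len(forward), "categories for forward order")   # 4 with these settings
print(len(reverse), "categories for reversed order")  # 2 with these settings
```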

Good luck!

Stephen Grossberg • 3 years ago

Thanks for mentioning effects of training order during fast or slow learning. Here are several facts: ART predictive accuracy is high, and essentially the same, using fast or slow learning, and memory is stable at both rates. As more trials occur, statistical consistency is approached.

Fast learning means that the error in response to every input is zeroed. If this is done with back propagation, memory is wiped out. Fast learning enables one-trial learning, in which each item is presented just once. This is impossible for back propagation.

ART algorithms pay attention to predictively important information. Back propagation does not. See Section 17 of http://sites.bu.edu/steveg/... for 17 properties of ART that back propagation does not have. As I noted earlier, ART has explained and successfully predicted many psychological and neurobiological data. Back propagation has not. For example, see https://www.sciencedirect.c... for how ART explains how we consciously see, hear, feel, and know about objects and events in a changing world that is filled with unexpected events.
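
To make the "memory is wiped out" point concrete, here is a toy illustration (a single linear unit trained by plain gradient descent; my own construction, not any published model): zeroing the error on each new input in turn disrupts what was learned about the previous one.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3) * 0.1                         # weights of one linear output unit

def train_to_zero_error(w, x, target, lr=0.1, steps=2000):
    """Gradient descent on (w.x - target)^2 until the error is ~zero."""
    for _ in range(steps):
        err = w @ x - target
        w = w - lr * err * x                         # gradient step
    return w

x_a, t_a = np.array([1.0, 0.0, 1.0]), 1.0
x_b, t_b = np.array([1.0, 1.0, 0.0]), -1.0

w = train_to_zero_error(w, x_a, t_a)
print("after A: response to A =", round(w @ x_a, 3))   # ~ +1.0
w = train_to_zero_error(w, x_b, t_b)
print("after B: response to B =", round(w @ x_b, 3))   # ~ -1.0
print("after B: response to A =", round(w @ x_a, 3))   # no longer ~ +1.0
```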

hquain • 3 years ago

The modal auxiliaries -- "may, might, could,..." -- get a real workout in this article. Solid results are harder to spot. May I suggest a more analytical approach?

The various disciplines and subdisciplines differ radically in their attitudes toward triumphalism. AI has always been just about to solve everything since the 1950s. Along with terrific accomplishments, the result has been distortions and suppressions that are just as terrific. For example, there is only the slightest hint in this article about the immense computational burden imposed by 'neural' nets, and how it is discharged in the modern era -- by access to immense computational resources.

“And evolution is pretty damn awesome. Backpropagation is useful. I presume that evolution kind of gets us there.” Does this kind of guff have any place in scientific reporting?

Jim Cross • 3 years ago

Baars, Grossberg, and others have pointed out the role of consciousness in any sort of complex learning. Grossberg almost equates learning and consciousness and his description matches what this article is discussing.

"The processes whereby our brains continue to learn about a changing world in a stable fashion throughout life are proposed to lead to conscious experiences. These processes include the learning of top-down expectations, the matching of these expectations against bottom-up data, the focusing of attention upon the expected clusters of information, and the development of resonant states between bottom-up and top-down processes as they reach an attentive consensus between what is expected and what is there in the outside world. It is suggested that all conscious states in the brain are resonant states and that these resonant states trigger learning of sensory and cognitive representations."

So the missing piece seems to be consciousness itself or exactly how it works.

Stephen Grossberg • 3 years ago

Thanks for noting that the authors seem not to be aware that many of their aspirations were realized years ago through the work that I, with many gifted collaborators, have carried out during the past 40 years. See sites.bu.edu/steveg for downloadable lectures and articles that illustrate this progress. One problem with their models is that their foundational hypotheses are incompatible with thousands of known experimental facts about how our brains make our minds. A second problem is that the authors' models do not explain any of these facts. In science, the theories that warrant attention are the ones that successfully explain and predict the most facts in a principled and testable way. The theories that I have been lucky enough to develop over the past 40 years pass that test. Stephen Grossberg

Ross Presser • 3 years ago

It seems ridiculous to say neurons can only receive local signals. Yes, they are only connected locally via synapses to a few neurons. But as the last few paragraphs imply, there are neurotransmitters that circulate in the blood and can carry signals to ANY neuron from anywhere. Attention may be only one of many mechanisms that use non-synaptic paths to carry feedback.
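
One hedged sketch of how such a broadcast signal could stand in for non-local feedback is a "three-factor" style rule, in which each synapse combines purely local pre- and post-synaptic activity with a single scalar that reaches every neuron. This is only an illustration of the idea, not a claim about actual physiology, and the names and values below are made up:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(scale=0.1, size=(4, 8))          # 8 inputs -> 4 output neurons

def three_factor_update(W, x, global_signal, lr=0.05):
    """dW = lr * (globally broadcast scalar) * (local Hebbian term)."""
    y = np.tanh(W @ x)                          # post-synaptic activity (local)
    return W + lr * global_signal * np.outer(y, x)

x = rng.normal(size=8)
W = three_factor_update(W, x, global_signal=+1.0)   # e.g. outcome better than expected
W = three_factor_update(W, x, global_signal=-0.3)   # e.g. outcome worse than expected
```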

Jim Cross • 3 years ago

Neurons mostly interact locally in assemblies, with some of the neurons of an assembly connecting to more remote neurons.

Juris Dzelme • 3 years ago

Backpropagation in the brain acts through neurotransmitters, which slowly circulate in the blood and can carry signals to ANY neuron from anywhere, and through electrical and magnetic fields, which act fast and can also carry signals to ANY neuron.

Torbjörn Larsson • 3 years ago

The generalized neural net models may show computational neuroscientists some ways learning could work, but we know from biological studies that animals have evolved specialized neural nets running from sensory input through the central nervous system to actuator output. Therefore the last part, on brain-architecture-inspired studies, could show promise.

If we go for the latter, it seems an analogue for how human brains learn was elucidated a while back by adopting a brain-inspired architecture [ http://develintel.blogspot.... ].

In this article from the Proceedings of the National Academy, Rougier et al. demonstrate how a specific network architecture - modeled loosely on what is known about dopaminergic projections from the ventral tegmental area and the basal ganglia to prefrontal cortex - can capture both generalization and symbol-like processing, simply by incorporating biologically-plausible simulations of neural computation.

After this training, the prefrontal layer had developed peculiar sensitivities to the output. In particular, it had developed abstract representations of feature dimensions, such that each unit in the PFC seemed to code for an entire set of stimulus dimensions, such as "shape," or "color." This is the first time (to my knowledge) that such abstract, symbol-like representations have been observed to self-organize within a neural network.

Furthermore, this network also showed powerful generalization ability. If the network was provided with novel stimuli after training - i.e., stimuli that had particular conjunctions of features that had not been part of the training set - it could nonetheless deal with them correctly.

sntrxt • 3 years ago

Very informative article. One small suggestion: it would be much appreciated if the author could provide some citations. I would have liked to have a look at the corresponding articles by Blake Richards and the NeurIPS paper by Roelfsema et al. without having to search for them. Thanks.

rjurney • 3 years ago

Neurons make me happy!

fb36 • 3 years ago

What if what is most important to realize about the brain is still unknown/unproven?

Consider how each & every moment our brains must be (instantly) choosing a thought/memory among (surely) an astronomical number of options!
Just as a quantum computer could instantly select a certain solution among an astronomical number of possible solutions!

Also, IMHO, a mind/consciousness is "simply" brain machinery controlled/commanded by free will & that is why a true human-like AI is impossible, since free will cannot be created artificially!

Walter • 3 years ago

They're on the right track. However, dopamine only acts in some brain regions. If they integrate norepinephrine into their model, they'll get a lot closer to the truth.

eastcastle • 3 years ago

I was drawn to this article thinking the content was related to quantum physics, which it unfortunately is not (I have not read it through, only keyword-searched it and found no "quantum" or "quanta"). I am eager to learn whether and how quantum principles function in animal physiology.

Torbjörn Larsson • 3 years ago

From the "About Quanta" link:

Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.”

Our reporters focus on developments in mathematics, theoretical physics, theoretical computer science and the basic life sciences.

There is very little known about quantum principles in biology unless you count classical chemistry as such. But FWIW, there is an area called quantum chemistry that explores some reactions that classical chemistry doesn't describe [ https://en.wikipedia.org/wi... ]. Those may or may not be part of physiology.

PB • 3 years ago

Great article! Love to hear about stuff like this.

JohnTArmstrong • 3 years ago

I believe that the premise, "Any biologically plausible learning rule also needs to abide by the limitation that neurons can access information only from neighboring neurons; backprop may require information from more remote neurons" is false.
I attended a meeting of Mensa in Washington, DC, around 1990 where the meeting subject was extrasensory perception. A number of scientists were presenting evidence-based research indicating that one of the interesting features of measurable extrasensory perception involved prediction of the near future. If in fact the brain of a human or animal is capable of predicting the near future in some reasonable way, then back propagation works.
While I was a bit skeptical, many of the experiments presented were pretty convincing. And I could see a biological reason why we would have developed this capability for self-preservation.
As a footnote, the concept of extrasensory perception is really kind of like unidentified aerial phenomena. It doesn't say anything except that we don't understand what it is we're observing.