Every now and then I read an academic article that just fizzes with scientific sass. It’s often written as a riposte to proponents of a rival theory, part of a scientific dialogue between groups who vehemently believe that their theory reigns supreme. Insults are slung, swaddled in jargon but still designed to sting.
My story this week was partly inspired by just such an article, written by none other than Francis Crick—the Nobel laureate who co-discovered the structure of DNA—and published in Nature, a science journal, in 1989. By this point in his career Crick had left molecular biology research behind and turned his attention to neuroscience.
The paper had the evergreen title “The recent excitement about neural networks”, and among the researchers it targeted was one Geoffrey Hinton, known today as the godfather of artificial intelligence. By the late 1980s Dr Hinton and other scientists had shown that small artificial neural networks could be taught to perform basic tasks, laying the groundwork for the algorithms we use today to power chatbots and self-driving cars.
Crick’s issue was not with the abilities of the models. Indeed, he conceded that “the results that can be achieved with such simple nets are astonishing.” His concern was that some scientists thought these networks could serve as models of the real brain. In his view, the artificial neural networks made by Hinton and others were biologically “unrealistic in almost every respect”. What’s more, he added, “most of these neural ‘models’ are not therefore really models at all because they do not correspond sufficiently closely to the real thing.”
Heated words. Let them simmer away, though, and what remains is an objection to backpropagation, the mathematical algorithm used to train the models and help them learn from their mistakes. The assumptions necessary for backpropagation to work simply did not hold in real brains, said Crick, who thought the brain would need as-yet-undiscovered, specialised neurons to carry out the algorithm.
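For readers curious what the algorithm actually does: backpropagation applies the chain rule of calculus in reverse, passing each layer's error signal back to the layer before it so every weight can be nudged in the direction that reduces the network's mistakes. A minimal illustrative sketch follows—a toy network of the kind Crick was writing about, fitting the XOR function. All names, sizes and the learning rate are illustrative choices, not anything from the article.

```python
import numpy as np

# Toy one-hidden-layer network trained with backpropagation on XOR.
# Sizes, seed and learning rate are illustrative assumptions.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
losses = []
for step in range(2000):
    # Forward pass: compute predictions and mean squared error.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(np.mean((out - y) ** 2))

    # Backward pass: propagate the error gradient layer by layer,
    # applying the chain rule in reverse -- this is backpropagation.
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    d_W2 = h.T @ d_out
    d_b2 = d_out.sum(axis=0)
    d_h = d_out @ W2.T * h * (1 - h)
    d_W1 = X.T @ d_h
    d_b1 = d_h.sum(axis=0)

    # Gradient-descent update: nudge each weight downhill.
    W2 -= lr * d_W2; b2 -= lr * d_b2
    W1 -= lr * d_W1; b1 -= lr * d_b1
```

The backward pass is the step Crick found biologically implausible: it requires each neuron to know, and send back along its connections, the exact error signal computed further downstream.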
His forceful pronouncements mostly silenced would-be dissenters. But in the intervening 35 years neuroscientists have come no closer to figuring out how real brains learn. At the same time, artificial neural networks trained using backpropagation have gone from strength to strength, rivalling—and indeed surpassing—humans in a growing number of fields. This has led a new generation of researchers to re-examine whether something like backpropagation could be at work in the brain after all.
As I report, a host of tweaked versions of backpropagation have been proposed which look more biologically realistic and, most importantly, counter some of Crick’s critiques. The scientific sass continues.
Thank you for reading. And a special mention to those of you who responded to last week’s newsletter about terraforming Mars. My colleague Tim Cross should be in touch with some of you whose questions piqued his interest. It was interesting to see how many of you ridiculed the prospect of humanity’s Martian future.
George, in Washington, did not hold back, describing the possibility as “utterly delusional” and as “pipe dreams that will forever remain in the realm of science fiction.”
Dan, another reader, was sceptical that we could ever live on such poor-quality terrain and under such high levels of radiation.
But it’s not all bad news. Indeed, a new study has suggested that even more liquid water than we thought might be lurking beneath the surface of the red planet. Read more about that here.
As always, if you have more questions or feedback for us, reach us at sciencenewsletter@economist.com.