“The Revolution Will Not Be Supervised,” Promises Facebook’s Yann LeCun in Kickoff AI Seminar
New developments in artificial intelligence (AI), particularly in artificial neural networks, are transforming fields as diverse as financial analytics, health care, and the automotive industry.
In a presentation at NYU Tandon, Yann LeCun, director of AI research at Facebook and a member of the New York University faculty, offered a review of the history of AI and where it’s headed. For those who fear we may soon be supplanted by thinking machines, take heart: LeCun kicked off the lecture by pointing out that the learning power of artificial neural networks is still pretty limited.
The inaugural speaker in Modern Artificial Intelligence, a new seminar series organized by Professor Anna Choromanska and hosted by NYU Tandon’s Department of Electrical and Computer Engineering, LeCun began by addressing a common misconception: that intelligent machines are just around the corner. LeCun, who is also founding director of NYU’s Center for Data Science, a Silver Professor of Computer Science at the Courant Institute of Mathematical Sciences, professor of electrical and computer engineering at the Tandon School of Engineering, and an associate faculty member of the Center for Neural Science, noted that machines simply cannot learn as efficiently as brains.
“It’s important to understand that for all of these AI systems we see today — the fact is, none of them is real AI,” he said. “None of them can match the ability of biological systems. We are very far from making machines intelligent.”
In his presentation, LeCun delved into the history of neural networks, beginning with optimization and back-propagation algorithms and zeroing in on convolutional neural networks, or ConvNets, a key innovation that he said will drive advances in AI for fields like self-driving cars, medical imaging, and bioinformatics.
LeCun, who introduced the concept of convolutional networks in a 1989 paper that has been cited more than 6,000 times, explained how the broad concept of neural networks initially elicited skepticism among those who could not see how the computers of the time could make such an approach practicable. While the machines of that era indeed could not, vast increases in computational power over the past two decades, together with the availability of big data through the Internet, are now delivering profound results.
“The evolution of ConvNets has led to great improvements in image recognition rates, for instance,” he said. “Before ConvNets the error rate was as high as 35 percent. Now it is below three percent, which is actually superhuman.”
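The image-recognition gains LeCun described rest on one simple operation: sliding a small filter (kernel) of shared weights across an image and computing a weighted sum at every position. As a rough illustration only, in plain Python, with a hypothetical `convolve2d` helper and a hand-picked edge-detecting kernel rather than anything shown in the talk, the core computation of a single convolutional layer looks like this:

```python
# Illustrative sketch: the core operation of a convolutional layer.
# A small kernel slides over an image, computing a weighted sum at
# each position; the same weights are reused everywhere, which is
# what makes ConvNets so effective for image recognition.
# (Deep-learning "convolution" is, strictly, cross-correlation:
# the kernel is not flipped.)

def convolve2d(image, kernel):
    """Valid-mode 2D convolution of a grayscale image with a kernel."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A vertical-edge-detecting kernel applied to a tiny image whose left
# half is dark (0) and right half is bright (1):
image = [
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
]
kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]
result = convolve2d(image, kernel)
# Strong responses where the dark/bright boundary falls under the
# kernel, and zero in the uniform region.
```

Real ConvNets stack many such filters with nonlinearities and pooling, and learn the kernel weights by back-propagation rather than hand-picking them.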
LeCun pointed out that Facebook has also made its own technical advances, including a computer vision technology called Mask R-CNN, in which neural networks create a kind of bas-relief overlay of large images, allowing a system to recognize and segment individual objects in a visual field, such as groups of people or cars, rather than just classifying individual pixels.
Facebook also adopted ConvNets for use in a language translation platform called ConvSeq2Seq. The system demonstrates how neural networks can facilitate advances in robotics, computer vision, speech recognition and more.
“Something designed for perception of vision can be used to produce language sequences, as text has properties in common with images,” LeCun explained.
He noted that the future of AI will be driven by advances in reinforcement learning, as well as supervised and unsupervised machine learning. The latter, which allows neural networks to autonomously cluster unlabeled sets of data into groups, will be the linchpin, a point he made by reworking the title of a well-known song by poet Gil Scott-Heron: “The Revolution Will Not Be Supervised.”
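The kind of unsupervised learning LeCun described, grouping unlabeled data purely by its structure, can be illustrated with a classic textbook algorithm, k-means clustering. This is a generic sketch in plain Python (the `kmeans` helper and the toy data are invented for illustration, not a method presented in the lecture):

```python
# Illustrative sketch of unsupervised learning: k-means clustering
# groups unlabeled points by their structure alone, with no
# human-provided labels.
import random

def kmeans(points, k, iters=20, seed=0):
    """Cluster 2-D points into k groups; returns (centroids, assignments)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    assign = [0] * len(points)
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        for i, p in enumerate(points):
            dists = [(p[0] - cx) ** 2 + (p[1] - cy) ** 2
                     for cx, cy in centroids]
            assign[i] = dists.index(min(dists))
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centroids[c] = (sum(x for x, _ in members) / len(members),
                                sum(y for _, y in members) / len(members))
    return centroids, assign

# Two well-separated, unlabeled blobs; k-means recovers them by itself.
points = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2),
          (5.0, 5.1), (5.2, 5.0), (5.1, 5.2)]
centroids, assign = kmeans(points, k=2)
```

The algorithm separates the two groups with no labels at all; LeCun’s argument is that far more powerful forms of such label-free learning will be needed before machines can acquire anything like common sense.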
For example, displaying slides of children at different ages responding to an image of a toy car that appears to be floating in the air, LeCun argued that unsupervised learning, which he called “the dark matter of AI,” will be critical if scientists are to successfully imbue machines with something that even a six-month-old baby has: common sense.
“If you show a four-month-old baby that image of a floating car, the baby says ‘that’s the way the world works.’ But after six or eight months of unsupervised learning they understand already that it is impossible for objects like toy cars to float in air. And they are amazed; they understand it can’t happen.”
Finally, LeCun discussed new models and techniques, such as generative adversarial networks (GANs), which learn to model the distribution of data, and capsule networks, a new model initially proposed by Google scientist Geoffrey Hinton. And he made it clear that, while AI systems aren’t as smart as cats or infants, and there are many obstacles still to surmount, the future of AI is wide open.
Upcoming seminars in the series will feature: