AI/DEEP LEARNING

Artificial Intelligence Bigwigs Weigh In on the Next Big Developments in AI in 2021



Hint:  It’s more than just OpenAI and GPT-3.

What do you get when the big luminaries converge at Montreal.AI for a pow-wow on the future of AI?  Answer:  a lot of information that someone needs to translate for people without code in their veins or a PhD in cognitive psychology.
 
If you want the unabridged version, you can check out the 3-hour, 49-minute Zoom call replay on Montreal.AI’s Facebook page, which features 16 speakers from the top of the artificial intelligence/deep learning food chain.
 
The big question they are addressing:  are big data and deep learning enough to get to artificial general intelligence?
 
Here is a summary of the answers given, and who gave them, based on reporting in VentureBeat:
 
1.  Hybrid AI
Gary Marcus, a preeminent computer and cognitive scientist
His idea:  He proposed a hybrid approach that combines learning algorithms with rules-based software.  Rule-based software is basically if-then logic:  if you do this, you get that.  It’s the opposite of what deep learning does, which is to go from the specific to the general by learning patterns from the training data you feed the model.
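
To make the contrast concrete, here is a minimal, purely illustrative Python sketch of the hybrid idea:  a learned model (represented by a stand-in scoring function) paired with hand-written if-then rules that can veto its output.  The model, rules, and scores are hypothetical, not anything Marcus has published.

```python
# Hybrid AI sketch: a "learned" scorer plus a symbolic, rule-based layer.
# Everything here is a toy stand-in; a real system would call a trained model.

def learned_caption_score(sentence: str) -> float:
    """Stand-in for a neural model's fluency score (0 to 1)."""
    # This toy just rewards longer sentences; a real score would be learned.
    return min(len(sentence.split()) / 10.0, 1.0)

# Rule-based layer: explicit if-then constraints encoding world knowledge.
RULES = [
    ("dogs are throwing", "dogs cannot throw frisbees"),
    ("frisbees at each other", "frisbees are thrown to a catcher, not at each other"),
]

def hybrid_check(sentence: str):
    """Score a candidate sentence, but let the rules veto nonsense."""
    violations = [reason for pattern, reason in RULES if pattern in sentence.lower()]
    score = 0.0 if violations else learned_caption_score(sentence)
    return score, violations

if __name__ == "__main__":
    # The fluent-but-nonsensical sentence gets vetoed; the sensible one passes.
    print(hybrid_check("Two dogs are throwing frisbees at each other."))
    print(hybrid_check("A person throws a frisbee and a dog catches it."))
```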
 
2.  Evolutionary theory as inspiration
Image classification and computer vision have driven the advances in deep learning of the past decade.  But more is needed; human intelligence comes from perception and interaction with the real world. 
 
OpenAI researcher Ken Stanley
His ideas:

  • “There is a fundamentally critical loop between perception and actuation that drives learning, understanding, planning, and reasoning. And this loop can be better realized when our AI agent can be embodied, can dial between explorative and exploitative actions, is multi-modal, multi-task, generalizable, and oftentimes social.”
  •  “There are properties of evolution in nature that are just so profoundly powerful and are not explained algorithmically yet because we cannot create phenomena like what has been created in nature,” Stanley said. 

3.  Reinforcement learning
Reinforcement learning is the training of machine learning models to make a sequence of decisions in a given scenario, learning from a reward signal rather than from labeled examples.
 
Richard Sutton, DeepMind researcher and pioneer of reinforcement learning
His ideas:

  • Sutton and DeepMind, the AI lab where he works, are deeply invested in “deep reinforcement learning,” a variation of the technique that integrates neural networks into basic reinforcement learning.  It is the approach DeepMind used to master games such as Go, chess, and StarCraft 2.
     
  • “Reinforcement learning is the first computational theory of intelligence.  Reinforcement learning is explicit about the goal, about the whats and the whys. In reinforcement learning, the goal is to maximize an arbitrary reward signal. To this end, the agent has to compute a policy, a value function, and a generative model,” Sutton said. 

In deep reinforcement learning, agents are given the basic rules of an environment and left to discover ways to maximize their reward.
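
For readers who want to see what that loop looks like in code, here is a minimal, hypothetical sketch in plain Python.  It uses tabular Q-learning on a made-up five-cell corridor rather than the neural-network-based variant DeepMind uses, but the agent-environment-reward loop has the same shape.

```python
# Toy reinforcement learning loop: an agent is told only the rules of a tiny
# environment (a 5-cell corridor) and a reward, and learns a policy by trial
# and error with tabular Q-learning. Hypothetical illustration, not DeepMind's code.
import random

N_STATES = 5          # corridor cells 0..4; reaching cell 4 yields the reward
ACTIONS = [-1, +1]    # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount, exploration rate

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally; otherwise exploit the current value estimates.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# After training, the greedy policy should step right (+1) from every cell.
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)])
```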
 
4.  Common sense and integrating general knowledge
The old refrain that common sense is not so common is doubly true of AI and deep learning models. 
 
An example
Here is a common example (see what we did there?) of how dumb even sophisticated AI sometimes seems:
 
A state-of-the-art model, when asked to create a sentence using the words "dog, frisbee, throw, catch," came up with "Two dogs are throwing frisbees at each other."

The model was fed the information "a person throws a frisbee and a dog catches it," but it still can’t get the sentence right without a common-sense understanding of the real world.  In normal conversation, you don’t have to explain everything, e.g., that dogs can’t throw frisbees, but AI is not there yet. 

University of Washington professor Yejin Choi 
Her idea:  “We know how to solve a dataset without solving the underlying task with deep learning today,” Choi said. “That’s due to the significant difference between AI and human intelligence, especially knowledge of the world. And common sense is one of the fundamental missing pieces.”

 
GPT-3 alone cannot be the future
The NLP and AI model landscape will evolve.  For now, OpenAI is top of mind, as many new startups are being built on the strength of the GPT-3 model.

