There is a revolution happening in analytics, and Artificial Intelligence (AI) is at its center. My goal with this post is to catch you up on the 80-year history of AI by the time I finish my coffee. Fair warning, I do take longer than some!
The Birth of AI: 1940s
The idea of non-human intelligence has been around since antiquity. However, serious research on Artificial Intelligence coincided roughly with the invention of the first programmable machines around 1940.
Around the same time, Alan Turing's work on the theory of computation showed that a machine could simulate almost any act of mathematical deduction just by shuffling ones and zeroes. If we presume human intelligence to be our ability to reason, it was suddenly and theoretically possible for a machine to mimic it.
The promise of this research was immediately captivating, and expectations soared.
The Double-Edged Sword of Lofty Claims
Lofty claims around a new technology are a double-edged sword. Funding pours into projects and companies, seemingly overnight. However, disappointment has a long memory and funding can dry up for years if expectations fall short. An AI winter can set in.
The First AI Winter: Cold War Era
There have been at least two such winters. The first came during the Cold War, when the US government wanted machines to translate Russian documents quickly. Instead, researchers quickly discovered that common sense is neither common nor easy to instill in machines.
A story goes that an early attempt to machine-translate "the spirit is willing but the flesh is weak" into Russian and back yielded "the vodka is good but the meat is rotten".
I think that our knowledge is grounded in our embodied experience. We experience reality with several senses and that shapes our conversation. Language is messy and constantly evolving. No wonder that the promise of AI did not translate at the time.
The Second AI Winter: The Late 1980s
The second AI winter, in the late 1980s, was a bit of an own goal. The AI community had split between:
- Symbolic AI (rigid, top-down): Expert systems that attempted to encode knowledge explicitly
- Connectionist AI (flexible, bottom-up): Approaches using interconnecting artificial neurons
Symbolic AI won a battle but lost sight of the war. Expert systems were all the rage for a while. They were:
- Too hard to maintain
- Unable to learn
- Not fault-tolerant
Several years and millions of dollars later, they fell from grace by 1990.
Important advances were made in the theory of connectionist AI; however, the computer processing power needed to apply them was just not there.
The Quiet Years: 1990s-2000s
Research in AI plateaued during the 1990s and 2000s. Some members of the community re-branded themselves as cognitive scientists, working in informatics, analytics, or even machine learning.
Watered-down (or more narrowly defined) AI seemed technically feasible, while general AI was considered a bit of a quack pursuit.
Quick Distinction: Narrow vs. General AI
Narrow AI is what is all around us today. Phone assistants like Siri, Google's search engine, image recognition, and self-driving cars are all examples of machines trained for specific (or narrowly defined) tasks.
General AI is a machine that can do everything including being sentient. We don't have one of those...yet.
The Thaw: Post-2010
Two things happened after 2010 that thawed the latest AI winter.
1. Convergence on Neural Network Architecture: Researchers converged upon a specific type of connectionist neural network architecture as the best candidate for learning patterns from data. This led to deep learning.
Remarkably, every example of AI you see today uses essentially the same algorithm! This type of architecture required massive processing power.
2. GPU Revolution: Available CPUs were struggling to keep up. Cut to 2013.
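To make the first point concrete, here is a minimal sketch of what "essentially the same algorithm" means: a deep network applies the same simple operation (a linear transform followed by a nonlinearity) over and over, layer after layer. The layer sizes, random weights, and use of NumPy here are my own illustrative choices, not anything from a specific system.

```python
import numpy as np

rng = np.random.default_rng(42)

def layer(x, w, b):
    # One layer: linear transform followed by a ReLU nonlinearity.
    return np.maximum(x @ w + b, 0)

# Toy network: 8 inputs -> two hidden layers of 32 -> 4 outputs.
sizes = [8, 32, 32, 4]
params = [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
          for m, n in zip(sizes, sizes[1:])]

x = rng.standard_normal((1, 8))  # one example with 8 features
for w, b in params:
    x = layer(x, w, b)           # the same pattern, repeated per layer

print(x.shape)  # (1, 4)
```

Training (adjusting the weights by gradient descent) follows an equally uniform recipe, which is part of why progress in one domain transferred so readily to others.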
The NVIDIA Story
The stock of NVIDIA, a chip-maker for computer graphics, had been languishing in the low teens for years. The company noticed all these graduate students buying their GPUs (Graphics Processing Units). They realized that these students had not become hardcore gamers overnight, but that the GPU architecture lent itself well to deep learning.
A GPU can have orders of magnitude more cores than a CPU. And neural networks are massively parallelizable. It is a marriage made in binary heaven. You can take chunks of a neural network and distribute them across GPU cores in order to compute them simultaneously.
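Here is a rough sketch of why that distribution works, using NumPy on a CPU as a stand-in for what a GPU does across thousands of cores. Every row of a layer's output depends only on its own input row, so the rows can be computed independently and in any order; the sizes and random weights below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))   # batch of 4 inputs, 8 features each
W = rng.standard_normal((8, 16))  # weights of a 16-unit layer
b = np.zeros(16)

# The whole batch at once: one big matrix multiplication plus ReLU.
full = np.maximum(X @ W + b, 0)

# The same result, computed one independent chunk at a time -- the
# kind of split a GPU makes when it assigns work to its cores.
chunks = [np.maximum(x @ W + b, 0) for x in X]

assert np.allclose(full, np.vstack(chunks))
```

Because the chunks never need to talk to each other mid-computation, adding more cores translates almost directly into more speed.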
NVIDIA's stock went from $25 a share in 2015 to almost $300 by September 2018.
Today: The Age of Narrow AI
Narrow AI surrounds us today. Even my thermostat tells me that it is learning. It sounds wonderful to have little AI helpers take over mundane tasks like:
- Programming the thermostat
- Organizing pictures
- Recommending which show to binge next
Such harmless fun. However, the likes of Bill Gates and Elon Musk have sounded the alarm about the future potential of AI.
The Great AI Debate
One of my favorite exchanges involves the CEO of Facebook, Mark Zuckerberg, terming AI naysayers "pretty irresponsible" in a casual BBQ video. Elon was quick to respond: "I've spoken to Mark about this. His understanding of the subject is limited."
Ouch.
Conclusion: A Different Time
I am almost through my coffee, and I'll save comments around the potential perils of general AI for a future post. I'll end by noting that this time it feels different.
Saying that we have achieved only narrow AI used to be a dig at the perceived unfulfilled promise of AI. Today, we recognize it simply as AI. In our pocket, in our home, and in our car.
Our relationship with deep learning is becoming personal and pervasive. And our increased reliance on machines whose operation we no longer understand should concern us.
For AI research and researchers, it does not feel like another winter is coming anytime soon.
Note: Any opinions are my own. This was originally posted on predictivemodeler.com.