So, it seems that this time around, Artificial Intelligence (AI), these days usually discussed under the banner of Machine Learning (ML), might actually go somewhere. In fact, it already is.
First, there is IBM’s Watson, which appears to converse with cool people in Super Bowl spots. Then there is the fact that the results of your Internet searches are getting more fluid and more, well, human. Financial institutions are using computers’ ability to sift through very large amounts of data and detect patterns of suspicious activity to improve their fraud detection.
A digital health startup that I know uses ML to look at vast troves of patient data and predict, very accurately, which patients in the ICU are likely to get into real trouble - i.e., close to death - 24 hours in advance. Health care professionals find this at once helpful - which patients should I focus on? - and frustrating - the software cannot say why a patient is about to tank, just that she is likely to. Humans love to know the “why,” not just the “what”; causality is hardwired into our brains.
Even Detroit, not known as the most innovative spot in the galaxy, seems to be taking seriously the idea that cars may become autonomous, or at least head in that direction. This will require large amounts of ML.
Now, AI has been around a very long time and, as technology hype cycles go, we’ve been in the trough for a long time. It came and went in the ’70s, ’80s and ’90s, usually fueled by Federal research funding.
Its breakthrough this time is due to a combination of three things. First, the radical drop in the cost of computing power means that what researchers did on mainframes over weeks four decades ago can now be done on a laptop in a few seconds, at a fraction of the cost.
Second, the Internet has given developers and users access to very, very large amounts of data, which can be used to “train” an ML program (more on this in a bit). Access to all that data, coupled with the third factor, is what drove ML forward.
The third factor is that researchers have improved the models they use, combining old-school statistical techniques (e.g., Bayesian and Gaussian methods) with newer ones (e.g., neural networks).
There is a fourth factor, the one that inevitably sets in when the hype around a new technology dies down: expectations for ML are much more modest than they used to be. The fact is that ML can handle the first 80% of making sense of a large amount of data quite well; the last 20% is nearly impossible for it and much better left to humans. Researchers understand this now.
Broadly speaking, ML works in four ways: 1. Supervised learning; 2. Unsupervised learning; 3. Reinforcement learning; and 4. Deep learning.
In supervised learning, the computer is “trained” on data sets that include the desired outcome of the analysis (e.g., a customer with a particular financial profile should get this credit score).
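The idea can be sketched in a few lines of Python. This is a toy one-nearest-neighbor classifier, not any particular lender’s method, and the financial profiles and labels below are invented for illustration: the “training” is simply keeping labeled examples around, and a new customer gets the label of the closest known example.

```python
def nearest_neighbor(train, query):
    """Return the label of the training example closest to `query`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(train, key=lambda pair: dist(pair[0], query))
    return label

# Labeled training data: each (income, debt ratio) profile is paired with
# the desired outcome -- the labels are what make this "supervised".
training_data = [
    ((90, 0.1), "good"),
    ((80, 0.2), "good"),
    ((30, 0.8), "poor"),
    ((20, 0.9), "poor"),
]

print(nearest_neighbor(training_data, (85, 0.15)))  # → good
print(nearest_neighbor(training_data, (25, 0.85)))  # → poor
```

Real systems use far richer models, but the shape is the same: historical examples with known outcomes teach the program how to label new ones.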
In unsupervised learning, the computer is not given historical labels; instead, it is used to explore patterns in the data set being examined. When you are on a consumer website, the “Recommended for you” listing may have been organized using unsupervised learning.
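A minimal sketch of that idea, with invented shopper data: k-means clustering groups unlabeled points purely by similarity. Nobody tells the program what the groups mean; it discovers them.

```python
import random

def k_means(points, k, iters=20, seed=0):
    """Group unlabeled points into k clusters; no labels are supplied."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Move each center to the mean of its cluster.
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = tuple(sum(xs) / len(cl) for xs in zip(*cl))
    return clusters

# Hypothetical shoppers described by (books bought, gadgets bought).
shoppers = [(9, 1), (8, 2), (10, 0), (1, 9), (2, 8), (0, 10)]
clusters = k_means(shoppers, k=2)
for group in clusters:
    print(group)
```

On this data the two clusters separate book lovers from gadget lovers, which is exactly the kind of grouping a “Recommended for you” engine can exploit.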
In reinforcement learning, the computer acts as an agent operating in an environment (the thing it is learning about) with a fixed set of actions. This method is often used in robotics, where the machine has to learn the most efficient way of accomplishing a task.
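Here is a tiny illustration, not any real robot’s controller: tabular Q-learning teaching an agent to walk down a short corridor. The agent is never told the answer; it only receives a reward when it reaches the goal, and from that it learns which of its two actions to prefer in each state.

```python
import random

def q_learn(n_states=5, episodes=300, alpha=0.5, gamma=0.9, seed=0):
    """Tabular Q-learning on a corridor: states 0..n-1, goal at the right
    end. Actions: 0 = step left, 1 = step right. Reward 1 at the goal."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy: mostly exploit the best known action,
            # but sometimes explore a random one.
            a = rng.randrange(2) if rng.random() < 0.2 else q[s].index(max(q[s]))
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Nudge the value estimate toward reward plus discounted future value.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learn()
policy = [row.index(max(row)) for row in q[:-1]]
print(policy)  # the learned policy should step right in every state
```

The agent discovers the efficient behavior (always step toward the goal) purely from trial, error and reward, which is the essence of the method.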
Deep learning uses layered neural nets to identify patterns in very large amounts of data, powering applications such as facial recognition.
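In real deep learning the network’s weights are learned from data across many layers; as a hand-built sketch of the underlying machinery, here is a two-layer network whose weights I have set by hand to compute XOR, a pattern no single neuron can capture on its own. Stacking layers is what lets such nets represent richer patterns.

```python
def step(x):
    """Threshold activation: the neuron fires (1) when its input is positive."""
    return 1 if x > 0 else 0

def layer(inputs, weights, biases):
    """One layer of neurons: each neuron is a weighted sum plus a bias."""
    return [step(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def xor_net(a, b):
    # Hidden layer: one neuron computes OR, the other AND.
    hidden = layer([a, b], weights=[[1, 1], [1, 1]], biases=[-0.5, -1.5])
    # Output layer: fires when OR is true but AND is not, i.e., XOR.
    return layer(hidden, weights=[[1, -1]], biases=[-0.5])[0]

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))  # prints the XOR truth table
```

A production facial-recognition net has the same layered structure, just with millions of weights learned by gradient descent rather than four set by hand.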
The last 20% that I mentioned a couple of paragraphs ago should comfort those who are concerned that, if machines get smart (and autonomous), there will be nothing left for us to do. Indeed, in a recent article, McKinsey pointed out that about 45% of the activities people are paid to perform could be automated today, and that even about 20% of the tasks of highly skilled individuals (CEOs, surgeons and such) are clerical. But that last 20% is what will keep humans in business and gainfully employed. How wonderful it would be to be freed of the clerical tasks that take up a chunk of my workday.
So, Siri may want to talk to you but you needn’t worry: you’re not going anywhere soon.