Most of you know that I’m a learning consultant by trade and I apply the science of learning to real-world learning and performance improvement projects for my clients. You may also have noticed that one of my side interests is artificial and augmented intelligence. At least, I used to think that this was a side interest, only tenuously connected to my “day job,” until several different threads converged in my brain and got me thinking: What if the Singularity – meaning the emergence of a true “artificial” intelligence (AI) — has already happened and most of us just haven’t noticed?
Here’s a short chronology of how my perception started to shift from “this is kind of cool” to “this could change everything.” First I need to give you a bit of a disclaimer here: This story is not intended to be a detailed chronology of scientific developments in the fields discussed. It is merely a meta observation of how my unique brain changed the way it pays attention to these fields as they have developed over time.
Computer science started drawing from neuroscience (instead of the other way around)
In 2009, a group of scientists founded FACETS (Fast Analog Computing with Emergent Transient States), an initiative formed to figure out how the brain solves problems and then build a computer that works the same way. You see, the brain and the traditional computer process information in very different ways. As the need for “Big Data” got, well, bigger and bigger, scientists realized that we were going to come up against a physical limit to our available computing power unless a new way of computing emerged. That’s when computer scientists started to look at the human brain for inspiration.
Why build a “meat computer”?
In 2010, neuroscientists estimated that the human brain’s storage capacity was somewhere around 2.5 petabytes (about 2.5 million gigabytes). This estimate took into account our roughly 86 billion neurons and all the different interconnections possible for each neuron, making the brain by far the most complex computing machine ever “built.” But that was 2010. By 2016, scientists had discovered that the capacity of the brain was actually at least 10 times higher, once they took into account how individual neurons can connect to each other in multiple ways to encode different memories. Once computer scientists realized that the human brain could be studied, at a certain level, as a computing network, we started hearing about “neural nets” being built in labs. In 2012, scientists at The Scripps Research Institute in California and the Technion–Israel Institute of Technology announced the development of a “biological computer” made entirely from biomolecules. This may not have been the first such accomplishment, but it was the first one I happened to note on my Twitter feed.
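Just to keep the numbers straight, here is a quick back-of-envelope calculation; the synapses-per-neuron average is a rough figure I am assuming for illustration, not one taken from the studies above.

```python
# Rough back-of-envelope arithmetic for the figures above.
# The ~1,000 synapses-per-neuron average is an illustrative assumption,
# not a value taken from the studies cited in this post.

NEURONS = 86e9                # ~86 billion neurons
SYNAPSES_PER_NEURON = 1_000   # rough average (assumption)

connections = NEURONS * SYNAPSES_PER_NEURON
print(f"~{connections:.1e} synaptic connections")               # ~8.6e+13

# The 2010 estimate, converted: 1 petabyte = 1,000,000 gigabytes.
estimate_pb = 2.5
print(f"{estimate_pb} PB = {estimate_pb * 1_000_000:,.0f} GB")  # 2,500,000 GB

# The 2016 revision put capacity at least 10 times higher.
print(f"Revised estimate: at least {estimate_pb * 10} PB")
```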
The evolution of machine learning
The dream of building an artificial intelligence can be found in classic science fiction going back to at least 1909, but fiction started to become reality in 1950, when Alan Turing proposed the idea of a machine that could converse believably enough to pass as human in controlled conversations with real people. The “Turing Test” continues to be discussed and applied to modern AI, although many are starting to think we may need a new test to keep up with the latest advances in the field.
Inspired by Turing’s work, scientists started trying to build computers that could mimic some of the functions of the human brain: navigating a space without help, solving problems whose answers had not been programmed into memory, recognizing patterns, and so on. For the next several decades, these scientists believed that the answer to this challenge lay in finding the right algorithms to tell the computer exactly how to perform each complex task. Perhaps the greatest example of this approach is IBM’s Deep Blue, the supercomputer that beat reigning world chess champion Garry Kasparov in 1997. More recently, in 2011, IBM’s Watson beat two of the best human contestants ever on the television quiz show Jeopardy!
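To make that algorithm-first style concrete, here is a minimal sketch of the kind of search technique chess programs in Deep Blue’s lineage are built on: generic minimax, with the game rules and scoring function left as placeholders I am assuming for illustration. It is not Deep Blue’s actual program, which layered custom hardware and a heavily hand-tuned evaluation function on top of this idea.

```python
# A minimal sketch of the "hand-written algorithm" style of game AI:
# classic minimax search. The rules of the game (legal_moves, apply_move)
# and the scoring function (evaluate) are placeholders supplied by the
# programmer; nothing here is learned from data.

def minimax(state, depth, maximizing, evaluate, legal_moves, apply_move):
    """Return the best score the player to move can force within `depth` plies."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)        # hand-crafted judgment of the position
    children = (apply_move(state, m) for m in moves)
    if maximizing:
        return max(minimax(c, depth - 1, False, evaluate, legal_moves, apply_move)
                   for c in children)
    return min(minimax(c, depth - 1, True, evaluate, legal_moves, apply_move)
               for c in children)
```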
While those earlier accomplishments were built on explicitly programmed algorithms and vast stores of pre-loaded data, more recent efforts at AI skip the hand-written rules and instead teach computers how to recognize patterns. Then, by dumping a massive amount of data into the machine (or connecting it directly to the Internet), the thinking goes, the machine will in effect teach itself how to think. Over time, just like a child, the computer grows more and more competent as it incorporates feedback from the outside world and revises its internal models based on new information. Using this type of learning, a computer (DeepMind’s AlphaGo) defeated a world-class Go champion, Lee Sedol, for the first time in 2016. One winning move, number 37, may be remembered as a turning point: the machine “invented” a move that, according to Go master Fan Hui, “was not a human move.”
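For contrast, here is a minimal sketch of learning from feedback, assuming a toy task (the logical AND function) and a single-layer perceptron; systems like AlphaGo use vastly deeper networks and far more data, but the principle of revising an internal model from error signals is the same.

```python
# A toy illustration of learning from feedback rather than hand-written rules.
# The task and model (logical AND, single-layer perceptron) are assumptions
# chosen for brevity; the point is that the "rule" emerges from the data.

training_data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

weights, bias, learning_rate = [0.0, 0.0], 0.0, 0.1

for epoch in range(20):
    for inputs, target in training_data:
        activation = sum(w * x for w, x in zip(weights, inputs)) + bias
        prediction = 1 if activation > 0 else 0
        error = target - prediction                    # feedback from the world
        weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
        bias += learning_rate * error                  # revise the internal model

print(weights, bias)   # a decision rule no programmer ever wrote down
```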
Digital assistants get “real”
Long before smartphones, a company called Palm changed the way we managed our day-to-day data with the Palm Pilot, a personal digital assistant. These hand-held computers could give you instant access to your calendar, your contacts and anything else you wanted to remember – as long as you took the time to painstakingly enter it in the first place. There was no touch-screen convenience with these little devices! You had to use a metal stick, called a stylus, to poke at the screen or keyboard to input information. And that little sucker was always getting lost!
Then came Apple’s Siri. Maybe one of the most remarkable things about this feature when it first came out was that you could “teach” it over time. Within a few days, each owner’s Siri became a unique “personality,” shaped by the selected preferences and responses of the owner. In 2015, Apple took this customization to a new level with a voice recognition capability, so that Siri could be “trained” to respond only to the owner’s voice.
When the 2013 movie “Her” presented a fictional story about a man falling in love with his digital assistant, it didn’t seem like too much of a stretch to the general public. In the movie, the man’s closest friends even come to accept “her” as a friend.
The Internet of Things becomes ubiquitous
In 1966, computer scientist Karl Steinbuch predicted that computers would soon be “interwoven into almost every industrial product.” Today I have a Fitbit on my wrist, uploading my heart rate, steps and other data to a site on the Internet, where I can download reports about the quality of my sleep, my workouts and more. We may have actually reached the point where the Internet of Things is “not just talking about toasters and automobiles” but talking about ourselves.
AI infiltrates education
Up to this point, I had just been noticing all of these apparently separate movements as cool stuff I liked to read and think about. Then it started hitting closer to home, to my work as a learning consultant. Fast forward to May 2016. That’s when we first learned that a computer programming class at Georgia Tech had been using an AI as a teaching assistant for an entire semester, completely unnoticed by most of the students in the graduate-level class. “Jill Watson” (the name might have been a clue) appeared friendly but a little green at the start of the semester, but as she learned more about the students she became more comfortable helping them with their homework assignments. At the end of the semester, the professor let the students in on the secret experiment. Most students admitted that they could not tell the difference between Jill and a “real” TA. While a few students claimed to be suspicious from the start, the numbers suggest to me that Jill Watson actually passed the Turing Test, though I haven’t seen anyone else make that claim yet. Another application of AI in education predated Jill Watson by two years and is still in operation: an AI that grades student essays and routinely performs well enough to be useful in real-life classrooms.
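As a sketch of the general idea only (this post does not describe Jill Watson’s internals, so everything below is an assumption), a digital TA can be imagined as a program that answers a question only when it closely matches one it has already seen answered, and defers everything else to a human.

```python
# Illustrative sketch of an AI teaching assistant that answers only routine,
# high-confidence questions and stays quiet otherwise. The question bank,
# similarity measure, and threshold are hypothetical, not details of the
# Georgia Tech system.

import difflib

answered_questions = {
    "when is assignment 1 due?": "Assignment 1 is due Friday at 11:59 pm.",
    "can we work in pairs on the project?": "Yes, teams of two are allowed.",
}

CONFIDENCE_THRESHOLD = 0.9   # only reply when the match is very close

def maybe_answer(new_question):
    best_match, best_score = None, 0.0
    for known_question in answered_questions:
        score = difflib.SequenceMatcher(
            None, new_question.lower(), known_question).ratio()
        if score > best_score:
            best_match, best_score = known_question, score
    if best_score >= CONFIDENCE_THRESHOLD:
        return answered_questions[best_match]
    return None   # defer to a human TA

print(maybe_answer("When is assignment 1 due?"))    # close match; answers
print(maybe_answer("Why does my compiler crash?"))  # no close match; stays silent
```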
If the Singularity is here – does it really matter?
Maybe it really doesn’t matter at this stage of the game. If we’re already using these assorted, narrowly targeted AIs to crunch our big data, keep our bridges from collapsing, monitor our health, serve as our personal assistants, grade our homework and annotate our research papers, haven’t we as a society already slipped into the Singularity without even noticing? It may be, as some fear, the cultural equivalent of the original “singularity,” a huge black hole that threatens to devour mankind. Or it could be the beginning of a beautiful friendship between man and the machines that have already begun to surpass us in so many ways.