A Brief History of AI

Humans have been using machines to augment our capabilities for a long time, so it’s only natural that we’ve come to a point where we’re looking to replicate our cognitive processes in some of those machines. Here’s a brief history of artificial intelligence. If you search for these events online, you will find that each source provides a slightly different list of key events. These are a few that I’ve called out for our purposes as learning professionals.

1763: Mathematician Thomas Bayes develops Bayesian inference, a decision-making technique that becomes adopted for teaching machines (and people) how to make decisions using pattern recognition and predictions based on probability.
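At its core, Bayesian inference updates a belief when new evidence arrives. As a purely illustrative sketch (all numbers are invented for the example), here is Bayes’ rule used to revise our confidence that a learner has mastered a skill after one correct answer:

```python
# Illustrative only: Bayes' rule updates the probability that a learner
# has mastered a skill after we observe one correct answer.
# The probabilities below are made-up assumptions for the example.

def bayes_update(prior, p_correct_if_mastered, p_correct_if_guessing):
    """Return P(mastered | correct answer) via Bayes' rule."""
    # Total probability of seeing a correct answer at all
    evidence = (p_correct_if_mastered * prior
                + p_correct_if_guessing * (1 - prior))
    return p_correct_if_mastered * prior / evidence

posterior = bayes_update(prior=0.5,                  # belief before the answer
                         p_correct_if_mastered=0.9,  # masters usually answer right
                         p_correct_if_guessing=0.25) # guessers sometimes do too
print(round(posterior, 3))  # belief rises from 0.50 to about 0.78
```

One correct answer shifts the estimate from a coin flip to roughly 78 percent — the same pattern-plus-probability reasoning that underpins many modern learning systems.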

1837: Charles Babbage invents “the analytical engine,” a machine designed to perform mathematical calculations. The machine requires instructions—a program—to perform this task. His colleague Ada Lovelace writes the first program to work on his prototype. Many historians consider Babbage to be the inventor of what would later be called the computer, and Lovelace the first programmer.

1898: Inventor and electrical engineer Nikola Tesla suggests that it might be possible to build a machine that is operated through a program, using a “borrowed mind” and wireless communication.

1939: Westinghouse unveils Elektro, one of the first humanoid robots. This machine can deliver a recorded response to a limited number of questions, walk, smoke a cigarette, and blow up balloons. He is accompanied by his robotic dog, Sparko. The machine is an entertaining curiosity, not a serious attempt at artificial intelligence, but it draws attention to the potential of what would eventually become known as robotics.

1943: Warren S. McCulloch and Walter Pitts suggest that building a network of artificial neurons could create a machine that could think, using the neurons’ on-or-off firing behavior (analogous to binary code).

1950: With his famous opening line, “I propose to consider the question, ‘Can machines think?’” Alan Turing predicts that machines might one day mimic the cognitive functions of humans. He proposes a test to identify this phenomenon, which later becomes known as the Turing Test. While the test sidesteps the definition of intelligence altogether, Turing proposes that as long as we can be convinced that we are communicating with a person, we can consider that machine to be “intelligent.”

1955: John McCarthy coins the term artificial intelligence in his proposal for a summer research conference at Dartmouth College, in Hanover, New Hampshire (the conference itself convenes in 1956). The conference is one of the first times that computing scholars contemplate the use of human language to program computers; the use of neural nets to simulate human thought-processing in computers; machine learning, a “truly intelligent machine [that] will carry out activities which may best be described as self-improvement”; the ability of a computer to form abstract conclusions and “orderly thinking”; and creativity. While the ambitious conference doesn’t achieve everything it set out to do, it establishes the blueprint for progress in AI and machine learning from that point up until the present day (McCarthy et al. 1955).

1997: IBM’s “supercomputer,” Deep Blue, becomes the first computer to beat a human chess champion in a match against grandmaster Garry Kasparov. Many doubt that a machine could really have performed so well and accuse IBM of cheating; to skeptics, the computer seems “too human” to be credible.

2011: IBM Watson defeats the best human players in the popular television game show Jeopardy! Although a stunning achievement, the victory is nowhere near as “intelligent” as it appears. Watson is running a simple program that searches a database and provides a response faster than its human competitors. It is, however, one of the first times that a computer is able to understand and respond to human speech, paving the way for many uses of natural language processing in future applications.

2014: “Eugene Goostman,” a computer program known as a chatbot, appears to have fooled enough judges to be the first AI to pass the Turing Test. Further scrutiny shows that although the program’s performance is interesting, it fools only a third of the judges and avoids answering some of the questions on the test, invalidating the result.

2016: Russia deploys AI to successfully influence the U.S. presidential election by using bots to post comments in social media designed to mislead voters and suppress voting activity by certain types of people. This is not the first time—nor the last—that Russia and other actors have successfully influenced the outcome of an election (Kamarck 2018).

2017: DeepMind’s AlphaGo caps a series of victories against humans in what is considered the most complex game in the world, Go. In a three-game match, the machine defeats world champion Ke Jie, who comments, “I thought I was very close to winning the match in the middle of the game, but that might not have been what AlphaGo was thinking” (Russell 2017).

2022: OpenAI releases ChatGPT, a chatbot built on the third generation of its GPT language models, demonstrating the power and potential of autoregressive generative language models.

Where Is AI in Education and Talent Development?

The International Data Corporation (IDC) forecasts that businesses worldwide will be spending $77.6 billion on cognitive and AI systems by 2022. The highest anticipated spending is for:

  • automated customer service agents
  • automated threat intelligence and prevention systems
  • sales process recommendation and automation
  • automated preventive maintenance
  • pharmaceutical research and discovery
  • consumer shopping advisors and product recommendations
  • digital assistants for enterprise knowledge workers
  • intelligent data-processing automation

Yet, when we look at the field of education, we see that organizations are lagging behind in many respects. According to one 2020 benchmarking study, 6 percent of Fortune 500 employers include chatbots in their recruiting and onboarding experience, and 8 percent deliver recommended jobs based on candidate profiles. In a related study, 14 percent of Fortune 500 companies offer new and prospective employees semantic search capabilities (Starner 2020).

There are many notable exceptions, however, and although I cannot possibly mention them all, here are a few that come to mind.

Ashok Goel, a professor at Georgia Institute of Technology, famously used an AI program to simulate a teaching assistant in 2016. “Jill Watson” supported students in the computer science master’s program, answering questions about course content and clarifying assignment directions. She communicated with students through email. The original version of the program was so effective that some students nominated Jill for an award for being an outstanding TA. Today the program continues to grow, and a new, more advanced AI, Jill Social Agent, was released in 2020. Now everyone knows that Jill is an AI, and she continues to serve as an example of practical AI applied to education.

In India, educators are attempting to overcome a severe shortage of qualified teachers with innovative AI approaches. In many schools, Indian students are interacting with AI for:

  • Adaptive practice: The algorithm selects appropriate practice activities based on individual performance to keep the student interested and challenged just enough to work hard to get to the next level without becoming discouraged.
  • Personalized content: An AI “teacher” provides content tailored to the academic level of each student, based on previous performance data.
  • Macro diagnostics: Algorithms predict the needs and performance of large groups of students based on historic performance data tracked during these individual engagements with AI.
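To make the adaptive-practice idea concrete, here is a deliberately minimal, hypothetical sketch (not any real product’s algorithm): the system keeps a running estimate of the learner’s ability, serves the exercise whose difficulty best matches it, and nudges the estimate after each answer.

```python
# Hypothetical sketch of adaptive practice. A real system would use a
# calibrated statistical model; this toy version just matches exercise
# difficulty to a running ability estimate on a 0-to-1 scale.

def update_ability(ability, correct, step=0.1):
    """Nudge the ability estimate up after a correct answer, down after a miss."""
    return ability + step if correct else ability - step

def next_item(ability, item_difficulties):
    """Serve the exercise whose difficulty is closest to the current estimate."""
    return min(item_difficulties, key=lambda d: abs(d - ability))

items = [0.2, 0.4, 0.6, 0.8]   # exercise difficulties (invented)
ability = 0.5                  # starting estimate
for correct in [True, True, False]:   # simulated learner answers
    item = next_item(ability, items)
    ability = update_ability(ability, correct)
print(round(ability, 2))
```

Two correct answers raise the difficulty of what the learner sees next; a miss eases it back — keeping the learner, as the list above puts it, challenged just enough without becoming discouraged.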

While I had to look far and wide for examples outside of India and China of AI in actual use in training and education today, there is much more potential than might first appear. The use cases being developed to drive performance in other industries can be easily adapted to applications for learning performance if we know where to look. And there are many voices in the talent development and education fields who speculate about a new future in which AI-enabled learning is widespread and highly effective.

There’s no doubt that many of us recognize the potential of these powerful and deeply intertwined sciences and technologies, but given how many examples are already in common use in other industries, it seems as though we’re falling behind. While we can’t catch up overnight, we can begin by discovering how we can use AI to analyze performance data, tailor learning journeys to individual learners, and interact directly with learners in new and engaging ways. Much as an AI can come up with a chess move or Go strategy that might never have occurred to a human to play, we may find that learning itself is transformed once we learn how to unleash the power of AI while we control its effects to accelerate learning and deepen retention.

Maybe we learning professionals just haven’t had time to envision a world in which learning and performance are enabled by AI. Maybe we lack the funding or sponsorship to make it happen right now. Or maybe most of us are just a little afraid of it. One thing is working to our advantage in this effort. Unlike banking, marketing, financial services, customer care, agriculture, and the many other industries that are already deploying AI successfully, the learning profession is founded on many of the same principles that drive AI. That’s because the science of learning and the science of building smart machines are actually part of the same multidisciplinary quest.

Neuroscience and AI Are Converging

One of the most exciting developments of the last few years is the way two seemingly unrelated scientific fields, neuroscience and machine learning, have been moving toward each other. As we learn more about the brain, we’re uncovering the secrets of how neurons organize themselves, communicate, recognize patterns, make decisions, form memories, and retrieve those memories. These are all functions that a self-learning AI must also perform to complete its task. The more we learn about the brain, the more insights we can apply to enhance our AI models. For example, one popular AI model, the neural network, is based on the brain’s structure of layers of neurons connected in a nonlinear fashion.
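That “layers of neurons connected in a nonlinear fashion” can be shown in a few lines. The toy network below (weights are arbitrary; a real network learns them from data) passes two inputs through two hidden neurons to one output, each neuron firing via a smooth on-or-off function:

```python
import math

# A toy neural network: 2 inputs -> 2 hidden neurons -> 1 output.
# The weights here are arbitrary illustrations; training would adjust them.

def sigmoid(x):
    """Smooth version of a neuron's on-or-off firing."""
    return 1 / (1 + math.exp(-x))

def neuron(inputs, weights, bias):
    """Weighted sum of inputs, passed through the nonlinearity."""
    return sigmoid(sum(i * w for i, w in zip(inputs, weights)) + bias)

def forward(x):
    hidden = [neuron(x, [0.5, -0.4], 0.1),   # hidden layer
              neuron(x, [-0.3, 0.8], 0.0)]
    return neuron(hidden, [1.2, -0.7], 0.05)  # output layer

print(round(forward([1.0, 0.0]), 3))  # a value between 0 and 1
```

Stacking many such layers, with millions of learned weights, is what turns this simple structure into the models behind modern AI.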

In a similar way, the more sophisticated our AIs become, the greater insights we can gain about how our biological brains work. AI is being used to map the tens of billions of neurons in the human brain and discover how these neurons communicate with one another.

Although synergy is an overused word, I can think of no better way to describe the way these two scientific fields are driving each other forward.

But we have some catching up to do first.

In Lewis Carroll’s Through the Looking-Glass, the Red Queen tells Alice, “Now, here, you see, it takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that!”

That’s a good description of where we stand when it comes to preparing for a future that has already begun.
