Artificial Intelligence: A Brief History

 


The Evolution of AI

Artificial Intelligence is commonly thought to be a brand-new technology. The reality is that the concept of AI has been around for a long time. Aristotle (384-322 B.C.) developed an informal system of syllogisms for proper reasoning, allowing one to generate conclusions mechanically, given initial premises. (1) I'm not going to go so far as to say that Aristotle invented artificial intelligence, but throughout history there have been important concepts and theories that helped pave the way for AI as we know it today. This article dives deeper into the history of artificial intelligence, highlighting the disciplines that contributed to its development, key breakthroughs, and the technologies that help to make AI possible.


Disciplines that Contributed to AI Development

It's important to understand that artificial intelligence draws from multiple disciplines, each helping to make it possible. Stuart Russell and Peter Norvig do a good job of breaking down each of the disciplines in their book Artificial Intelligence: A Modern Approach. In the book, they discuss how artificial intelligence is a combination of philosophy, mathematics, economics, neuroscience, psychology, computer engineering, and linguistics. 

I recommend having a closer look at their work as they provide a much more thorough breakdown of the history of AI. For the purposes of this article, however, I'm going to be much more brief in my historical analysis.

Alan Turing can be considered a grandfather of artificial intelligence. He suggested that humans use "available information as well as reason in order to solve problems and make decisions, so why can't machines do the same thing?" (3) In 1943, drawing in part on Turing's theory of computation, Warren McCulloch and Walter Pitts produced what is now generally recognized as the first work in AI. They proposed a model of artificial neurons in which each neuron is characterized as being "on" or "off", with a switch to "on" occurring in response to stimulation by enough neighboring neurons. Building on their work (and that of others), the term "artificial intelligence" was coined in 1956 at a workshop at Dartmouth College.
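To make their model concrete, here is a minimal sketch in Python (my own illustration with an invented threshold, not McCulloch and Pitts' original notation) of a threshold unit that switches "on" only when enough of its inputs are "on":

```python
def artificial_neuron(inputs, threshold):
    """A McCulloch-Pitts-style unit: every input is 0 ("off") or 1 ("on").

    The neuron switches "on" only when the number of active inputs reaches
    the threshold -- that is, when enough neighboring neurons stimulate it.
    """
    return 1 if sum(inputs) >= threshold else 0

# With a threshold of 2, the unit behaves like a logical AND of two inputs.
print(artificial_neuron([1, 1], threshold=2))  # 1 ("on")
print(artificial_neuron([1, 0], threshold=2))  # 0 ("off")
```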

Key Breakthroughs and False Starts

The years between 1956 and 2024 have been marked by incredible breakthroughs as well as serious lulls in the advancement of artificial intelligence. Two main issues have traditionally hampered AI's development. The first is that, in the mid-20th century, computers were extremely expensive and not widely available. The second is computing capability itself: it wasn't until very recently that computers were powerful enough to run artificial intelligence models like the ones we see today.

The late 20th century was a busy period, so I'm going to summarize some of the key events. By no means is this list exhaustive: (2)

1951 - Marvin Minsky and Dean Edmonds developed the first artificial neural network (ANN), called SNARC.

1952 - Arthur Samuel developed the Samuel Checkers-Playing Program, the world's first self-learning program.

1956 - The term "artificial intelligence" was first coined.

1958 - Frank Rosenblatt developed the Perceptron, an ANN that could learn from data and served as the foundation for modern neural networks.

1958 - John McCarthy developed the programming language Lisp, which became a widely adopted language for AI developers.

1959 - Arthur Samuel coined the term "machine learning" and Oliver Selfridge published "Pandemonium: A Paradigm for Learning", which became a "landmark contribution to machine learning that described a model that could adaptively improve itself to find patterns in events."

1964 - Daniel Bobrow developed STUDENT, an early natural language processing (NLP) program designed to solve algebra word problems.

1965 - Edward Feigenbaum, Bruce G. Buchanan, Joshua Lederberg and Carl Djerassi developed the first "expert system", Dendral, which assisted organic chemists in identifying unknown organic molecules.

1966 - Joseph Weizenbaum created Eliza, a highly celebrated program capable of engaging in conversations with humans.

1966 - Stanford Research Institute developed Shakey, the world's first mobile intelligent robot that combined AI, computer vision, navigation, and NLP. Shakey is considered one of the first self-driving vehicles.

1969 - Arthur Bryson and Yu-Chi Ho discovered a backpropagation learning algorithm that enabled multilayer ANNs, serving as a foundation for deep learning.

1997 - IBM's Deep Blue defeated Garry Kasparov, the reigning world champion, in a chess match.

2009 - Rajat Raina, Anand Madhavan and Andrew Ng published "Large-Scale Deep Unsupervised Learning Using Graphics Processors," which presented the idea of using GPUs to train large neural networks.

2011 - IBM's Watson defeated Jeopardy!'s all-time human champion, Ken Jennings.

Technologies that Make AI Possible

I would be remiss if I didn't briefly discuss the technologies that actually make artificial intelligence possible. As mentioned at the beginning of the article, two main inhibitors to the development of AI have been access to computers and computing technology itself. That said, we have now hit a point in history where the technologies are in place for some real advancement.

I thought it would be fun to ask GPT-4o for some help. I asked it to describe some of the technologies that make artificial intelligence and machine learning possible. This is what it said:

  1. Algorithms and Models
    1. Neural Networks - mimic the human brain to recognize patterns and learn from data.
    2. Decision Trees - used for classification and regression tasks by splitting data into branches.
  2. Data
    1. Big Data - large datasets that provide the raw material for training AI and ML models.
  3. Computing Power
    1. GPUs and TPUs - specialized hardware that accelerates the training and inference of complex models.
  4. Frameworks and Libraries
    1. TensorFlow, PyTorch - software libraries that simplify the development and deployment of AI and ML models.
  5. Natural Language Processing (NLP)
    1. Techniques for understanding and generating human language, enabling applications like chatbots and language translation.
  6. Cloud Computing
    1. Provides scalable resources and infrastructure for AI and ML tasks, making it easier to handle large-scale computations.
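To make a few of those items concrete, here is a minimal sketch, assuming PyTorch is installed and using an invented regression task on random data, of how a framework, a small neural network, training data, and hardware acceleration fit together:

```python
import torch
from torch import nn

# Computing power: use a GPU if one is available, otherwise fall back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Data: a stand-in "dataset" of 256 random examples with 4 features each.
X = torch.randn(256, 4, device=device)
y = X.sum(dim=1, keepdim=True)  # toy target: the sum of each example's features

# Algorithms/models and frameworks: a small neural network defined with PyTorch.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

# A few steps of training: predict, measure the error, adjust the weights.
for step in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

print(f"final training loss: {loss.item():.4f}")
```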
As with many AI-generated responses, GPT-4o's list provides a good start, but it remains important to dive a little deeper:

Speech recognition and generation date back to the 1950s and 1960s. In 1952, Bell Laboratories designed the Audrey system, which could recognize digits spoken by a single voice. A decade later, IBM came out with Shoebox, which was able to understand and respond to 16 words in the English language. (4)

Optical character recognition (OCR), an early application of computer vision, was first developed in 1974. OCR is a technology that allows computers to recognize text printed in any font or typeface. Intelligent character recognition (ICR) is a related technology that can decipher handwritten text using neural networks. (5)
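As an illustration of what OCR looks like in practice today (my own example, not part of the original 1974 systems), here is a minimal sketch assuming the open-source Tesseract engine and its pytesseract Python wrapper are installed, with "scanned_page.png" as a placeholder file name:

```python
from PIL import Image   # pip install pillow
import pytesseract      # pip install pytesseract (also requires the Tesseract engine)

# Recognize the printed text in a scanned image and return it as a string.
# "scanned_page.png" is a placeholder file name for this example.
text = pytesseract.image_to_string(Image.open("scanned_page.png"))
print(text)
```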

Between the 1970s and 1980s, computer-generated imagery (CGI) was invented, allowing for the creation of simple visual effects and graphics. (6) As an aside... it's amazing how far CGI technologies have come. Ever re-watch an old favorite, only to be disappointed by how dated the graphics look? I won't name names, but I wish they'd remaster some of those movies from my childhood!

If we look to the more recent past, we see the development of technologies such as multi-agent systems and planning and decision-making systems. Without going into too much detail, multi-agent systems are essentially many different agents, each assigned a specific task, that work together to achieve a complex outcome. IBM defines planning as "the task of finding a procedural course of action for a declaratively described system to reach its goals while optimizing overall performance measures." (7) Finally, AI decision-making systems combine intellectual traditions from across computer science and electrical engineering to develop techniques for analyzing and synthesizing systems that interact with an external world via perception, communication, and action, while learning, making decisions, and adapting to a changing environment. (8)
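As a toy illustration of the multi-agent idea (my own sketch in plain Python, not any particular framework; the agents and their tasks are invented), here is how several narrow agents can be chained together by a simple coordinator to produce a more complex outcome:

```python
class Agent:
    """A minimal agent: it knows one narrow task and performs only that."""

    def __init__(self, name, task):
        self.name = name
        self.task = task  # a function that transforms the work item

    def run(self, work):
        print(f"{self.name}: working on '{work}'")
        return self.task(work)


# Three invented agents, each assigned one specific step of the job.
researcher = Agent("researcher", lambda topic: f"notes on {topic}")
writer = Agent("writer", lambda notes: f"a draft based on {notes}")
editor = Agent("editor", lambda draft: f"a polished version of {draft}")

# The coordinator: pass the result of one agent to the next until the
# complex outcome (a finished article) is produced.
result = "the history of AI"
for agent in (researcher, writer, editor):
    result = agent.run(result)

print(result)
```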

Conclusion

It's really important to have a basic understanding of the history of artificial intelligence. It shapes our understanding, provides context for where we are coming from, and helps us see where we are going. Not to mention, it's quite interesting that the concept of artificial intelligence has been around for 70+ years. With that in mind, this article serves as a very brief overview of the development of artificial intelligence from the 1950s to the early 2000s, along with some of the important technologies that have helped us get to the point we are at today.

How I Used AI in this Article
  • Asked GPT-4o to provide me with some key technologies that have helped make AI and ML possible.
  • DALL-E generated the image at the beginning of the article.
  • GPT-4o was used to cite my sources in Chicago format.


About the Author


Dan Doerr has dedicated his life to defense and security. He was introduced firsthand to national defense and security while serving in the U.S. Army. Since then, he has earned his Master's degree from the Josef Korbel School of International Studies, focusing his research on International Security. Dan holds the CompTIA Security+ certification and is an active volunteer for the Denver Chapter of the Information Systems Security Association (ISSA). He loves technology, and when he isn't diving head first into the tech world, he likes to do the opposite by getting out into nature.


Sources

1. Russell, Stuart, and Peter Norvig. Artificial Intelligence: A Modern Approach. 3rd ed. Upper Saddle River, NJ: Pearson, 2010. https://people.engr.tamu.edu/guni/csce421/files/AI_Russell_Norvig.pdf.

2. Karjian, Ron. "The History of Artificial Intelligence: Complete AI Timeline." TechTarget, August 16, 2023. https://www.techtarget.com/searchenterpriseai/tip/The-history-of-artificial-intelligence-Complete-AI-timeline.

3. Anyoha, Rockwell. "The History of Artificial Intelligence." Science in the News, August 28, 2017. https://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/.

4. Sonix. "A Brief History of Speech Recognition." Sonix, accessed May 20, 2024. https://sonix.ai/history-of-speech-recognition.

5. IBM. "What is Computer Vision?" IBM, accessed May 20, 2024. https://www.ibm.com/topics/computer-vision.

6. Videozen. "The Evolution of AI in Video Generation: A Historical Perspective." Medium, April 18, 2023. https://medium.com/@videozen/the-evolution-of-ai-in-video-generation-a-historical-perspective.

7. IBM. "AI Planning." IBM Research, accessed May 20, 2024. https://research.ibm.com/projects/ai-planning.

8. MIT EECS. "Artificial Intelligence + Decision-Making." MIT EECS, accessed May 20, 2024. https://www.eecs.mit.edu/research/artificial-intelligence-decision-making.

