AI technology doesn’t have to be scary. Photo Credit: Graphic Design by Karen Clay

In recent years, the topic of Artificial Intelligence (AI) has occupied the collective psyche of our nation and our world. We are increasingly aware of AI programs such as ChatGPT, Midjourney, Gemini, Copilot, and a “gabillion” others for just about any task you can imagine. It’s no wonder many of us are bracing for the day when the Matrix or the Terminator jumps off the movie screen into real life! Well, I have a news flash for you: we have been integrating technology into our daily lives for tens of thousands of years!

Let’s start with a basic definition of AI, then delve into a bit of history on how we came to be here. AI is the simulation of intelligence by computer systems programmed to perform complex tasks such as reasoning, problem-solving, decision-making, perception, and language understanding. It is not so much a futuristic concept as something already integrated into products we use every day.

From practically the beginning of humankind, we have developed tools to enhance our ability to calculate and communicate effectively. The abacus and earlier calculating devices represent some of the first tools humans used to aid in mathematical calculations. These devices have a rich history and have evolved over millennia.

Early counting devices made from bone or wood date back to the Ishango Bone, a prehistoric tool discovered in the Congo that dates to around 20,000 BCE. It features a series of notches suggesting it may have been used for arithmetic or astronomical calculations. The discovery provides valuable insight into the cognitive and mathematical abilities of early humans, reflecting the importance of numerical and temporal record-keeping long before the advent of written language or more advanced mathematical systems.

The abacus is one of the earliest known calculating devices, with origins dating back to ancient Mesopotamia around 2400–2300 BCE. It was also independently developed in other parts of the world, including China, Greece, and Rome. The abacus allows users to perform basic arithmetic operations such as addition, subtraction, multiplication, and division. Skilled abacus users can perform these operations quickly and accurately, sometimes rivaling electronic calculators in speed.

The Antikythera mechanism, dating from around 100 BCE, is considered one of the earliest analog computers. Discovered in a shipwreck off the coast of the Greek island of Antikythera, this device was used to predict astronomical positions and eclipses for calendrical and astrological purposes.

From this point, devices for computational tasks multiplied at an exponential pace. Milestones include:

  1. Mechanical calculators such as Schickard’s Calculating Clock (1623) and Pascal’s Calculator (1642)
  2. Early computers such as Babbage’s Difference Engine (1822) and Konrad Zuse’s Z3 programmable digital computer (1941)
  3. The ENIAC, the first general-purpose electronic digital computer (1946)
  4. The Dartmouth Conference, which is widely considered the formal founding of AI as a field of study (1956)
  5. Moore’s Law, a concept by Gordon Moore, co-founder of Intel, stating that the number of transistors on a microchip doubles approximately every two years while the cost of computing is halved (1965)
  6. ELIZA, an early natural language processing computer program, developed by Joseph Weizenbaum at MIT, mimicked human conversation and was one of the first programs to demonstrate interaction with a computer using natural language (1966)
  7. IBM Deep Blue, the AI chess computer that defeated world champion Garry Kasparov (1997)
  8. The growth of Machine Learning and Big Data, where advances in machine learning algorithms and the availability of large datasets led to significant improvements in AI capabilities (2000s)
  9. The Transformer architecture by Vaswani et al., which introduced the Transformer model and revolutionized natural language processing (2017)
  10. OpenAI’s Generative Pre-trained Transformer (GPT), a large language model pre-trained on vast amounts of text data (2018)

These advancements have transformed how we interact with technology and have paved the way for future developments in AI and beyond. Most notably, Moore’s Law has held true for several decades and has significant implications for the advancement of AI. Modern AI models, such as deep neural networks, involve millions, even billions, of parameters that require extensive computational resources for training. The advancements predicted by Moore’s Law have made it feasible to build and train such models. History shows that we are continually evolving, so we must be ever mindful that we do so in a constructive way.
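To see what that doubling actually implies, here is a minimal sketch in Python. It illustrates only the arithmetic of Moore’s Law, not a precise historical model; the starting count of roughly 2,300 transistors is the Intel 4004 from 1971, used purely as a familiar reference point.

```python
# Moore's Law as simple compound doubling:
# count(t) = initial_count * 2 ** (t / doubling_period)

def transistors_after(years: float, initial_count: int = 2_300) -> float:
    """Project a transistor count `years` into the future, assuming a
    doubling roughly every two years (Moore's estimate). The default
    initial_count is the Intel 4004's ~2,300 transistors (1971)."""
    doubling_period_years = 2
    return initial_count * 2 ** (years / doubling_period_years)

# Fifty years of doubling turns the ~2,300 transistors of a 1971 chip
# into roughly 77 billion -- the same order of magnitude as today's
# largest processors.
print(f"{transistors_after(50):,.0f}")  # -> 77,175,193,600
```

That is the power of an exponential curve: a modest-sounding rule, compounded for decades, is what makes training models with billions of parameters feasible today.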

Karen Clay, Clay Technology and Multimedia