From Personal Assistants to Robotaxis
Every day, millions of people casually interact with AI-driven personal assistants like Amazon’s Alexa, Apple’s Siri, and Google Assistant. These digital companions exemplify how AI (Artificial Intelligence) has embedded itself into our lives. We are also beginning to see the emergence of far more autonomous forms of AI. For instance, robotaxis – driverless ride-hailing services – are already being tested and deployed in cities across the world. Unlike personal assistants that respond to commands, these vehicles independently navigate traffic, make complex real-time decisions, and pursue the overarching goal of safely transporting passengers to their destinations.
It appears that AI has suddenly become all-pervasive, not only powering personal assistants and driverless cars, but also search engines, navigation tools, customer service chatbots, smartphone cameras, and even the algorithms behind entertainment and social media platforms. The roots of this transformative technology, however, trace back nearly seven decades, to a summer conference that gave Artificial Intelligence both its name and its mission. Since then, it has been creeping up on us – stealthily for decades, and with unprecedented speed of late.
The Birth of AI: Dartmouth 1956
The term Artificial Intelligence (AI) was coined by John McCarthy in the 1955 proposal for the Dartmouth Summer Research Project on Artificial Intelligence, a workshop held in the summer of 1956. Along with Marvin Minsky, Nathaniel Rochester, and Claude Shannon, McCarthy proposed using the phrase “artificial intelligence” in the formal conference document.
Their aim was to explore how machines could perform tasks that, if carried out by humans, would be considered to require intelligence – that is, reasoning and learning to solve problems. The Dartmouth meeting is widely regarded as the official birth of AI as a research discipline, setting the long-term goal of building machines capable of mimicking human reasoning and behaviour.
At that time, however, the idea seemed almost utopian. Computing power was extremely limited, and the data available for analysis and inference was sparse. Still, the ambitious vision of the conference planted the seeds for decades of innovative work in computing and cognitive science.
Machine Learning: Teaching Machines to Improve
One of the most influential concepts to emerge from early AI research was Machine Learning. The term was coined by Arthur Samuel in 1959, in the context of developing computer programs that could improve their performance at tasks through experience. Samuel described machine learning as “the field of study that gives computers the ability to learn without being explicitly programmed.” The definition emphasized a crucial idea: computers could adapt and improve through experience rather than relying solely on rigid programming. Samuel demonstrated this with the game of checkers: his program learned strategies and improved its play as it accumulated experience.
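To make the idea concrete, here is a minimal sketch in Python of learning from experience. It is not Samuel’s checkers program; the “hidden rule” and the update step are invented purely for illustration. A model starts out ignorant and gradually discovers a relationship from examples alone, rather than being explicitly programmed with it.

```python
import random

# Hidden relationship the learner is never told: y = 3*x + 2 (plus a little noise).
def observe_example():
    x = random.uniform(-1, 1)
    y = 3 * x + 2 + random.gauss(0, 0.1)
    return x, y

w, b = 0.0, 0.0                      # the learner's current guess, initially ignorant
learning_rate = 0.05

for step in range(5000):
    x, y = observe_example()         # gain one more piece of experience
    error = (w * x + b) - y          # how wrong the current guess is
    w -= learning_rate * error * x   # nudge the guess to reduce the error
    b -= learning_rate * error

print(round(w, 2), round(b, 2))      # close to 3 and 2: learned, not explicitly programmed
```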
Historical Significance
Machine learning has since become a foundational pillar of AI, enabling systems to perform tasks like:
- Pattern recognition (e.g., handwriting, speech, images)
- Prediction (e.g., stock prices, weather, economic parameters)
- Autonomous decision-making (e.g., self-driving cars, industrial automation, virtual assistants)
By allowing machines to learn from data, machine learning bridged the gap between symbolic AI (rule-based systems) and modern adaptive AI (systems that also learn from experience), aided by breakthroughs in neural networks, natural language processing, and generative models.
A vivid modern manifestation of this can be seen in AI-driven personal assistants like Alexa, Siri, and Google Assistant. These digital companions leverage Natural Language Processing (NLP) to understand spoken language and interpret conversational tone; apply Machine Learning (ML) to improve their responses over time by adapting to user preferences and personalizing responses; and harness Deep Learning and Neural Networks to detect user intent, sentiment, and the subtle nuances of human speech. They exemplify how decades of research have converged into real-world tools that provide seamless, intuitive, and interactive experiences, turning once-futuristic visions into everyday convenience.
Expert Systems: Codifying Knowledge into Machines
Another of the earliest practical offshoots of AI research was the Expert System, a concept popularized by Edward Feigenbaum at Stanford University. Feigenbaum stressed that intelligent systems derive their power from both their inference mechanisms and the knowledge encoded within them. Two expert systems that originated from the Stanford AI group under Feigenbaum’s leadership were:
- DENDRAL (1965): created for chemical analysis, helping to identify the structure of organic molecules.
- MYCIN (1972): created for medical diagnosis, particularly of infectious diseases.
Expert systems proved that AI could provide valuable support for inferencing in specialized domains, foreshadowing modern applications in healthcare, business, and engineering.
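The power of such systems came from pairing a knowledge base of if-then rules with an inference engine. The sketch below is a hypothetical illustration of that idea using simple forward chaining; the rules are invented for the example and are not drawn from DENDRAL or MYCIN.

```python
# Knowledge is stored as if-then rules; a simple forward-chaining engine keeps
# firing rules until no new conclusions can be derived. (Hypothetical rules,
# not the actual MYCIN or DENDRAL knowledge base.)

RULES = [
    ({"fever", "productive_cough"}, "suspect_bacterial_infection"),
    ({"suspect_bacterial_infection", "gram_positive_culture"}, "consider_antibiotic_X"),
]

def forward_chain(observed):
    """Derive every conclusion reachable from the observed facts."""
    facts = set(observed)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "productive_cough", "gram_positive_culture"}))
# The derived facts include 'suspect_bacterial_infection' and 'consider_antibiotic_X'.
```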
Decision Support System (DSS)
In parallel, Peter G. W. Keen and Michael S. Scott Morton advanced the idea of the Decision Support System (DSS), formalized in their 1978 book. Designed for problems where all decision rules are not known in advance, a DSS helps managers and professionals compile and process information from various sources, analyse scenarios, and make informed choices, especially in semi-structured or unstructured situations where human judgment remains crucial. The core characteristics include:
- Interactive user interfaces.
- Support for scenario analysis, data manipulation, and hypothesis testing.
- Integration of data, models, algorithms, and personal knowledge for inferencing and decisions.
Both Expert Systems and Decision Support Systems highlighted the key dependencies for the progress of AI: the need for data and the need for computing power. Both resources remained scarce for decades. That, however, began to change dramatically in the 1990s.
The Web Revolution and the Rise of Big Data
The invention of the World Wide Web by Tim Berners-Lee in 1989 at CERN transformed the information landscape. When CERN made the web technology freely available to the public in 1993, the web rapidly evolved into a global repository of information and knowledge, filled with text, graphics, images, and videos.
This explosion of data provided the raw material AI systems needed to learn and improve. Coupled with exponential growth in computing power, it enabled researchers to build increasingly sophisticated mathematical and statistical models, moving AI far beyond its early limitations.
Search Engines: Unlocking the Web’s Potential
While the web was rich in information, it needed effective tools for navigation. Search engines played a crucial role in making this vast ocean of information accessible and analysable:
- Yahoo! Search (1995): Began as a directory in 1994, created by Jerry Yang and David Filo, and evolved into a search engine in 1995, helping users explore Yahoo’s expanding directory.
- Google (1998): Launched by Larry Page and Sergey Brin in 1998, Google revolutionized search with its PageRank algorithm, which ranks web pages based on their importance within the link network (sketched after this list), thereby delivering more relevant results than its competitors.
- Baidu (2000): Founded in China, Baidu became the dominant search engine for Chinese users, shaping how millions accessed and consumed information.
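The core intuition behind PageRank, sketched below on a made-up link graph, is that a page is important if important pages link to it. This is only a minimal illustration; real web-scale PageRank additionally handles dangling pages, personalization, and enormous sparse graphs.

```python
# Minimal PageRank sketch on a tiny, hypothetical link graph:
# a page is important if important pages link to it.

links = {                      # page -> pages it links to
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

damping = 0.85
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}   # start with equal importance

for _ in range(50):            # power iteration until the ranks settle
    new_rank = {p: (1 - damping) / len(pages) for p in pages}
    for page, outgoing in links.items():
        share = damping * rank[page] / len(outgoing)   # a page shares its rank with its links
        for target in outgoing:
            new_rank[target] += share
    rank = new_rank

for page in sorted(rank, key=rank.get, reverse=True):
    print(page, round(rank[page], 3))          # C ranks highest: most pages point to it
```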
Search engines not only democratized access to information but also enabled AI researchers and businesses to collect, organize, and analyse massive volumes of data, fuelling advances in machine learning and data-driven decision-making.
Early Natural Language AI: ELIZA
Even before the web, experiments in natural language processing hinted at the potential of AI-human interaction. ELIZA, developed by Joseph Weizenbaum at MIT between 1964 and 1967, simulated a conversation using simple pattern-matching and substitution.
ELIZA was a pioneering rule-based conversational program that generated responses based on keyword matches in user input. While it lacked true comprehension, ELIZA was revolutionary in showing how computers could give the illusion of understanding.
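A tiny sketch conveys the flavour of ELIZA-style pattern matching and substitution. The rules below are simplified inventions for illustration, not Weizenbaum’s original DOCTOR script, which also swapped pronouns (for example, “my” to “your”) and ranked keywords far more carefully.

```python
import re

# ELIZA-style conversation: match a keyword pattern in the user's input and
# substitute part of it into a canned response template.

RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]

def eliza_reply(text):
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return "Please go on."        # default when no keyword matches

print(eliza_reply("I am worried about my exams"))
# -> "Why do you say you are worried about my exams?"
```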
From ELIZA to Large Language Models (LLMs)
Modern Large Language Models (LLMs) are the evolutionary successors of ELIZA. Whereas ELIZA relied on fixed rules and limited pattern matching to generate plausible responses, the first LLMs were designed to generate highly coherent, contextually appropriate original text. Newer, more sophisticated LLMs are integrated with multimodal architectures and generative AI models, creating not only original text but also original images, graphics, and videos from text prompts.
In summary:
- ELIZA (1960s): Rule-based, keyword-driven conversation simulation.
- LLMs (2010s onward): Models powered by deep learning that can be trained to generate original creative writing, which can be accompanied by multimodal embellishments.
Extant LLMs are the deep learning engines that fuel generative AI tools such as ChatGPT, Perplexity, and Gemini. In many ways, today’s LLMs carry forward the lineage of conversational AI that began with foundational experiments like ELIZA, on a vastly expanded scale.
Chatbots: From Scripts to AI-Powered Assistants
Chatbots are computer programs designed to simulate human conversation through text or voice commands. They engage with users in a natural, human-like way to answer questions, provide customer support, or assist with various tasks.
Modern chatbots have evolved along two lines:
- Scripted Chatbots: Relying on pre-programmed rules, these systems answer basic, predictable questions.
- AI-Powered Chatbots: Relying on machine learning, often built on LLMs, these systems understand varied phrasing, process complex requests, and generate adaptive, personalized responses.
Applications have spread across customer service, digital assistants, marketing, and information retrieval, making chatbots one of the most visible manifestations of AI in everyday life.
Generative AI: A New Frontier
A major step toward modern Generative AI came in 2014, when Ian Goodfellow and his team introduced Generative Adversarial Networks (GANs), a method that can create highly realistic synthetic data, such as images that look almost real. Around the same time, researchers developed Variational Autoencoders (VAEs), which learn to model and generate complex patterns in data. In 2017, the transformer architecture was introduced, making it possible to understand and generate language with much greater accuracy. These methods – GANs, VAEs, and transformers – became the foundation of Generative AI, a type of artificial intelligence that can create new content such as text, images, audio, video, and even code (computer programs) from given instructions or prompts. A well-known example is ChatGPT (Chat Generative Pre-trained Transformer), an advanced AI chatbot developed by OpenAI that uses the transformer approach to generate human-like conversations.
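Of these building blocks, the transformer’s core operation, scaled dot-product attention, can be sketched in a few lines. The shapes and inputs below are toy values chosen for illustration; real models add learned projections, multiple attention heads, masking, and many stacked layers.

```python
import numpy as np

def attention(Q, K, V):
    """Each query attends to all keys; outputs are weighted sums of the values."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                            # similarity of queries to keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)    # softmax over keys, per query
    return weights @ V

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                   # 4 tokens, each an 8-dimensional embedding
X = rng.normal(size=(seq_len, d_model))

# In a real transformer Q, K, V come from learned linear projections of X;
# here we reuse X directly to keep the sketch short.
out = attention(X, X, X)
print(out.shape)                          # (4, 8): one context-aware vector per token
```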
From Chatbots to AI Agents and Agentic AI
Chatbots started out as simple programs that could respond to questions and follow scripts. Over time, these systems have become smarter and more capable, leading to the development of AI Agents and now Agentic AI.
- AI Agents: These are helpful digital assistants that can carry out specific tasks on your behalf. For example, a scheduling assistant that automatically books meetings in your calendar, or a shopping bot that compares prices online and gives you a shortlist of the best deals. They work well within clear, predefined boundaries.
- Agentic AI: This goes a step further. Instead of just following instructions, an Agentic AI can set goals, gather information, analyse it, draw inferences, and make decisions, all while continuously learning from experience to improve over time. An Agentic AI doesn’t just react to inputs – it actively pursues outcomes.
Take self-driving cars as an example. They don’t just stop at red lights or follow lane markings. Their bigger goal is to get passengers to their destination safely and efficiently, while observing the traffic rules. To accomplish these goals, they must adapt to unpredictable traffic situations and continuously improve their driving skills through empiricism – learning from actual experience.
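A toy sketch of that perceive-decide-act-learn loop is shown below. The “environment”, actions, and rewards are hypothetical stand-ins, vastly simpler than a real self-driving stack, but they show an agent improving its behaviour toward a goal purely from the outcomes of its own actions.

```python
import random

ACTIONS = ["cautious", "normal", "assertive"]
value = {}     # learned estimate of how well each driving style works in each condition
counts = {}

def sense_environment():
    """Stand-in for perception: returns the current traffic condition."""
    return random.choice(["light", "heavy"])

def outcome(style, traffic):
    """Stand-in for the world: safe-and-efficient trips score higher."""
    if traffic == "heavy":
        return 1.0 if style == "cautious" else 0.2
    return 1.0 if style == "normal" else 0.5

for trip in range(2000):
    traffic = sense_environment()                              # perceive
    if random.random() < 0.1:                                  # occasionally explore
        style = random.choice(ACTIONS)
    else:                                                      # otherwise exploit experience
        style = max(ACTIONS, key=lambda a: value.get((traffic, a), 0.0))
    reward = outcome(style, traffic)                           # act and observe the result
    key = (traffic, style)
    counts[key] = counts.get(key, 0) + 1
    value[key] = value.get(key, 0.0) + (reward - value.get(key, 0.0)) / counts[key]  # learn

# After many trips the agent has learned, from experience alone, to drive
# cautiously in heavy traffic and normally in light traffic.
for key in sorted(value):
    print(key, round(value[key], 2))
```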
The Uncertain Future: The Genie from the Lamp
With LLMs, Generative AI, AI Agents, and Agentic AI advancing rapidly, humanity faces a critical question: Will machines begin to control mankind?
As history has shown, every leap in AI – from Dartmouth to machine learning, from expert systems to the web and search engines, from ELIZA to LLMs, from AI agents to Agentic AI – has expanded both opportunity and risk.
While the promise of unprecedented progress in medicine, science, creativity, and human productivity is real, so is the potential for misuse, bias, or unintended consequences. The outcome will hinge on humanity’s ability to collaborate globally, establish ethical safeguards, and guide AI development toward constructive purposes.
The metaphor of Aladdin’s genie fits well. The power of AI, once unleashed, cannot easily be put back. Whether this power will serve humanity or threaten it depends on how responsibly we manage it.
The story of AI is still being written. Time will tell whether it becomes the force that uplifts humanity or undermines it.
#AI, #ArtificialIntelligence, #ML, #MachineLearning, #AIAgent, #AgenticAI, #ELIZA, #Robotaxis, #SelfDrivingCars, #NeuralNetwork, #DSS, #ExpertSystems, #Bot, #ChatGPT, #Alexa, #Siri, #GoogleAssistant, #Perplexity, #Gemini