RHC Editorial Team : 23 July 2025 08:29
Artificial intelligence (AI) is a discipline that aims to develop systems capable of emulating certain human cognitive abilities.
Over the years, AI has gone through different phases, with periods of fervent activity followed by periods of slowdown. In this article we will explore the history of artificial intelligence and its crucial moments.
In the 1950s and 1960s, numerous scientists and researchers contributed to the foundation of AI, benefiting from government investments that supported research in this field. Some of the key milestones that marked the birth of artificial intelligence:
The Turing Test: In 1950, Alan Turing published "Computing Machinery and Intelligence," proposing the imitation game as a practical criterion for machine intelligence.
The Dartmouth Workshop: In 1956, John McCarthy coined the term "artificial intelligence" at the Dartmouth Summer Research Project, widely regarded as the founding event of the field.
The Perceptron: In 1958, Frank Rosenblatt introduced the perceptron, an early trainable model of an artificial neuron that inspired decades of neural network research.
In the 1970s, AI faced several challenges that slowed its progress. One significant event was the "XOR affair": in their 1969 book "Perceptrons," Marvin Minsky and Seymour Papert highlighted the limitations of perceptron models on problems as elementary as replicating the behavior of the XOR function. In particular, they showed that a single layer of connected units can only classify inputs that are linearly separable, i.e., divisible by a straight line or, in higher dimensions, a hyperplane, and XOR's four input-output pairs cannot be separated this way.
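This limitation is easy to see in code. Below is a minimal sketch, assuming only NumPy is available: the perceptron learning rule never classifies all four XOR cases correctly, while a two-layer network with hand-wired (purely illustrative) weights that compose OR and AND solves the problem.

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])  # XOR truth table

# Single-layer perceptron: can only learn linearly separable functions.
w, b = np.zeros(2), 0.0
for _ in range(100):                        # perceptron learning rule
    for xi, yi in zip(X, y):
        pred = int(w @ xi + b > 0)
        w += (yi - pred) * xi
        b += (yi - pred)
print([int(w @ xi + b > 0) for xi in X])    # never matches [0, 1, 1, 0]

# Two-layer network with hand-wired weights: hidden units compute
# OR (threshold 0.5) and AND (threshold 1.5); output fires on OR and not AND.
h = (X @ np.array([[1, 1], [1, 1]]) + np.array([-0.5, -1.5])) > 0
out = (h @ np.array([1, -1]) - 0.5) > 0
print(out.astype(int))                      # [0 1 1 0]
```

The second half is exactly the fix the field later adopted: adding a hidden layer lets a network compose several linear separators into a nonlinear decision boundary.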
Another significant event that contributed to the slowdown of AI was the publication of the ALPAC Report in 1966, drafted by the Automatic Language Processing Advisory Committee. The committee highlighted the limitations of AI in the field of machine translation from Russian to English. Machine translation was a strategic goal for the United States, but the report emphasized that, despite dedicated efforts, machine translation systems had achieved disappointing results.
These analyses, combined with a low return on investment, called the effectiveness of artificial neural networks into question and ushered in a period of skepticism and disillusionment with AI. It is important to emphasize, however, that this first AI "winter" also yielded important lessons. The limitations of perceptrons highlighted the need to develop more complex and adaptable AI models, and this period of reflection contributed to a greater awareness of the challenges accompanying the development of AI.
The 1980s saw a significant resurgence in AI research and development, driven primarily by symbolic AI. This branch of AI focuses on representing knowledge explicitly and using rules and inference to address complex problems. Expert systems and knowledge-based systems were developed as tools for solving such problems by leveraging knowledge bases represented primarily as if-then rules. An earlier landmark in this symbolic tradition is SHRDLU, developed by Terry Winograd around 1970: a program that interacted with a simulated blocks world through natural language, demonstrating some understanding of context and semantics despite its rule-based approach.
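As a flavor of how such systems reasoned, here is a minimal forward-chaining sketch in Python; the facts and rules are hypothetical examples, not taken from any historical system.

```python
# Knowledge base: known facts plus if-then rules (premises -> conclusion).
facts = {"has_fever", "has_cough"}
rules = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu"}, "recommend_rest"),
]

# Forward chaining: keep firing rules until no new fact can be derived.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # {'has_fever', 'has_cough', 'possible_flu', 'recommend_rest'}
```

Classic expert systems scaled this same idea to hundreds or thousands of rules, adding features such as certainty factors and explanation facilities.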
Besides expert systems, another milestone of this period was the Stanford Cart, a mobile robot developed at Stanford that could autonomously navigate an obstacle-filled environment using camera vision. This demonstrated that robots could move through the physical world without direct human guidance, opening up new perspectives for the practical application of AI. You can see the Stanford Cart navigate a room while avoiding obstacles in the video below!
The 1980s were also characterized by the theoretical study of fundamental algorithms for AI, such as the back-propagation algorithm, popularized in 1986 by David Rumelhart, Ronald Williams, and one of the fathers of modern AI, Geoffrey Hinton. This algorithm, which propagates the error signal backward through a network to adjust its weights, improved the ability of neural networks to adapt to data, opening up new possibilities for AI. Learn more with this lecture by Hinton!
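As an illustration, here is a minimal back-propagation sketch, again assuming only NumPy; the architecture and hyperparameters are illustrative. Trained with gradient descent, a small two-layer network learns the very XOR function that defeated the single-layer perceptron.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

# One hidden layer of 4 sigmoid units, one sigmoid output unit.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))
lr = 0.5

for _ in range(10_000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: the error signal flows from the output back to the input.
    d_out = (out - y) * out * (1 - out)   # gradient of squared error at output
    d_h = (d_out @ W2.T) * h * (1 - h)    # gradient at the hidden layer
    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # typically converges toward [[0], [1], [1], [0]]
```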
Despite remarkable progress, AI faced new challenges and setbacks in the late 1980s and early 1990s. In particular, expert systems had several limitations, including their dependence on the information initially provided and their limited adaptability across domains. Furthermore, maintaining and updating a knowledge base required significant effort and cost. This led to cuts in investment in AI research, which at the time was still heavily dependent on government funding.
In recent decades, Artificial Intelligence has experienced a remarkable renaissance thanks to the explosion of machine learning (ML) and deep learning (DL). Some of the key developments of this period:
Introduction of Deep Neural Networks: A turning point in AI came in 2012, when AlexNet won the ILSVRC image-classification competition by a large margin, paving the way for deep neural networks capable of learning complex representations directly from data. DL has since revolutionized many fields, such as natural language processing and computer vision (a minimal sketch of such a network follows this list).
GPU Hardware: The increase in computing power has been a crucial factor in the AI explosion. Advances in hardware technology, particularly high-performance graphics processing units (GPUs), have provided machines with the resources necessary to train complex models.
The annual ILSVRC classification competition: Launched in 2010, it has played a key role in spurring progress in image processing. The competition is based on the ImageNet dataset, with over 15 million images belonging to more than 20,000 classes (the competition itself uses a subset of roughly 1.2 million images in 1,000 classes). Thanks to DL methods, classification errors have been significantly reduced, opening up new perspectives in the field of computer vision.
AI Community: The evolution of AI has been driven by knowledge sharing: the openness of code, datasets, and scientific publications has played a crucial role. Open-source frameworks like TensorFlow and PyTorch, backed by major technology companies (Google and Facebook, respectively), have accelerated development, giving researchers and developers access to tools and resources for experimentation. ML competitions have become reference platforms for sharing models and datasets and for collaboration among researchers.
Private Investment: High-tech giants have recognized the potential of AI to transform their products and services and open up new business opportunities. As a result, they have allocated considerable funds to AI research and development.
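To make the items above concrete, here is a minimal sketch assuming PyTorch is installed: a small AlexNet-style convolutional network placed on a GPU when one is available. The layer sizes are illustrative and far smaller than the original AlexNet.

```python
import torch
import torch.nn as nn

# A toy convolutional classifier in the spirit of AlexNet.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # learn local image features
    nn.ReLU(),
    nn.MaxPool2d(2),                              # downsample 224 -> 112
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),                      # collapse spatial dimensions
    nn.Flatten(),
    nn.Linear(32, 1000),                          # scores for 1,000 ILSVRC classes
)

# GPUs are what made training such models at scale feasible.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

x = torch.randn(1, 3, 224, 224, device=device)    # one fake RGB image
print(model(x).shape)                             # torch.Size([1, 1000])
```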
The history of AI is characterized by periods of excitement and discovery followed by moments of challenge and setback. In recent decades, however, AI has made significant progress thanks to the evolution of machine learning and deep learning. Symbolic AI pioneered knowledge representation and the use of rules and inference, while machine learning and deep learning have revolutionized the field by enabling machines to learn from large amounts of data. As we continue to explore the possibilities of AI, it is crucial to also consider the ethical and social challenges that accompany its development and use: AI offers great promise, but issues such as privacy, security, liability, and social impact must be carefully addressed to ensure its responsible and beneficial use.

The RHC AI group aims to write informative content, with in-depth explorations of the topics discussed in this article and much more! More details on upcoming articles can be found here, stay tuned!