History of artificial intelligence
In 1854, the British mathematician George Boole argued that logical reasoning can be represented and manipulated mathematically, as a system of equations. Far ahead of his time, this insight earned him recognition as a forerunner of today's computational sciences.
His work, known as Boolean algebra, laid the foundations of computer logic. It would awaken in other scientists the ambition to create machines that, by synthesizing human thought and behavior through precise procedures known as algorithms, might one day rival humanity.
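Boole's central idea can be made concrete in a few lines. The sketch below (an illustration, not drawn from any specific historical text) expresses the basic logical operations as arithmetic over the values 0 and 1, which is exactly the sense in which logic becomes "a system of equations":

```python
# Boolean algebra as arithmetic on {0, 1}:
# each logical connective is an ordinary equation.

def NOT(x):
    return 1 - x            # negation: 1 - x

def AND(x, y):
    return x * y            # conjunction: multiplication

def OR(x, y):
    return x + y - x * y    # inclusive disjunction

# Truth-table check for OR:
for x in (0, 1):
    for y in (0, 1):
        print(x, y, OR(x, y))
```

These three equations suffice to build every other logical connective, which is why Boolean algebra maps so directly onto digital circuits.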
Literature, robots, and the laws of robotics
Science fiction did its part to popularize and normalize the concept of artificial intelligence (AI). Two authors of the genre coined words that were as revolutionary then as they are commonplace today.
In 1921, the Czech playwright Karel Čapek was the first to use the word "robot", in his play R.U.R. The term comes from the Slavic word robota, meaning "forced labor" or "drudgery".
In 1942, Isaac Asimov published the short story "Runaround" (rendered in some translations as "Vicious Circle"), leaving a legacy for literature and for the young science of computing. From this work came the Three Laws of Robotics, still cited today, which establish that:
- A robot shall not harm a human being or, by inaction, allow a human being to be harmed.
- A robot shall comply with orders given by human beings, except for those that conflict with the first law.
- A robot must protect its own existence to the extent that this protection does not conflict with either the first or the second law.
Computers and Algorithms
Konrad Zuse, a German engineer, completed the first functional programmable computer, the Z3, in 1941. The machine could execute programmed instructions, read from punched film, but could not store the program in memory.
Years earlier, in 1936, Alan Turing, the father of modern computing, had published the paper "On Computable Numbers", in which he gave the first precise formulation of the concept of an algorithm, the foundation of programming.
In 1950, he published the paper "Computing Machinery and Intelligence", in which he proposed evaluating a machine by its ability to pass as human in conversation. This test is now known as the Turing Test.
Despite all these advances, the field did not yet have a name. It was not until 1956 that the American computer scientist John McCarthy coined the term "artificial intelligence", after convening prominent researchers from various disciplines at the Dartmouth Conference.
From those discussions it became clear that the effort would have to be multidisciplinary, and that the technology needed for its development did not yet exist.
Artificial Neural Network
In 1957, the American psychologist Frank Rosenblatt designed the Perceptron, the first artificial neural network: a program that could learn by trial and error.
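The "trial and error" learning Rosenblatt introduced can be sketched in a few lines. The example below is a minimal modern illustration of the perceptron learning rule, not Rosenblatt's original implementation; the AND-gate dataset and the learning-rate value are illustrative choices:

```python
# A single perceptron learns by nudging its weights
# whenever its prediction is wrong (trial and error).

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights w and bias b for two binary inputs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            prediction = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - prediction      # zero when correct
            w[0] += lr * error * x1          # adjust only on mistakes
            w[1] += lr * error * x2
            b += lr * error
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Logical AND: output 1 only when both inputs are 1
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

After training, `predict(w, b, 1, 1)` returns 1 and the other three input pairs return 0. A single perceptron can only learn linearly separable functions, a limitation Minsky and Papert later analyzed in depth.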
Building on Rosenblatt's work, Marvin Minsky and Seymour Papert published "Perceptrons" in 1969, a mathematical analysis of the limits of simple neural networks that became a reference point in the study of neural networks for the development of AI.
Uncertain future for AI
The following decades brought alternating periods of euphoria and disillusionment in the development of AI. In 1979 the Stanford Cart became a forerunner of autonomous vehicles, and in 1996 IBM's "Deep Blue" won its first game against the reigning world chess champion, Garry Kasparov, before defeating him in a full match the following year.
The great challenge was not only the lack of technology but also the lack of capacity to generate, process and store enough information to develop these complex neural networks.
AI and its allies
With the opening of the internet to the public in the early 1990s, the rise of Big Data, and the promise of 5G networks on the doorstep, advances in AI have multiplied.
In 2005, the American scientist Raymond Kurzweil, extrapolating from Moore's Law, predicted that by 2045 machines would reach a level of intelligence superior to that of humans, on the grounds that the "evolutionary" capacity of technology doubles at regular intervals.
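The arithmetic behind such predictions is simple compound doubling. As a hedged illustration (assuming, for the sake of the example, one doubling every two years, the classic Moore's Law cadence, rather than Kurzweil's own accelerating schedule):

```python
# Cumulative growth factor implied by regular doubling,
# from the 2005 prediction to the 2045 horizon.

years = 2045 - 2005        # 40 years
doublings = years // 2     # one doubling every 2 years -> 20 doublings
growth = 2 ** doublings    # total growth factor

print(growth)              # 1048576, i.e. roughly a million-fold increase
```

Even under this conservative cadence, forty years of doubling yields about a million-fold increase in capacity, which is why exponential extrapolations produce such dramatic forecasts.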
We can no longer speak of an uncertain future, but of a present in constant development. AI is already part of daily life: in the search engines that learn our preferences, in public lighting networks, in security systems, in medicine, and in the cell phones we hold in our hands.
The history of AI is far from over; it is being written today. And we are no longer mere users or spectators, but participants in a phenomenon whose story is now inseparable from our own.