A brief history of Artificial Intelligence (and a glance at the future)

We can think of the natural sciences as sciences that reverse engineer the natural world. For instance, neuroscientists study the human brain, collect and analyze its data, and attempt to understand how it operates.

On the other hand, we can think of computer science as the science that builds a system, a technological world, instead of reverse engineering one. For example, an artificial intelligence engineer uses data and statistical methods to create what one could call a “virtual brain”.

In this context, computer science and the natural sciences progress in the same direction but begin from opposite ends. They are bound to meet somewhere in the middle, at the much-discussed “Singularity”, the hypothetical point at which machine intelligence surpasses human intelligence. Nevertheless, there is still a long way to go. That being so, let’s start from the beginning.

The creation of Adam (Revised)

Alan Turing, the father of AI, wrote:

We may hope that machines will eventually compete with men in all purely intellectual fields. But which are the best ones to start with? Even this is a difficult decision. Many people think that a very abstract activity, like the playing of chess, would be best. It can also be maintained that it is best to provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English. This process could follow the normal teaching of a child. Things would be pointed out and named, etc. Again I do not know what the right answer is, but I think both approaches should be tried. We can only see a short distance ahead, but we can see plenty there that needs to be done.

Can a computer talk like a human? Source: Alex Gendler, TED-Ed talk

How does one approach such an ambitious task? Before diving into science-fiction scenarios, let’s first consider the stepping stones that brought AI to where it is today. In short, let’s dive into the history of AI.

Before the Turing Era

Human imagination “conceived” the concept of AI thousands of years before it was put into practice. The first mention of it occurs in antiquity, specifically in Greek mythology. Talos was a mythical bronze giant that protected Minoan Crete from invaders (1). According to the prevailing account, Talos was not born but crafted by Hephaestus, the god of fire and metalworking.

Talos — The first AI

Such legendary automata are found throughout human history, in stories and novels (e.g. Goethe’s Faust). Beyond myths and legends, Aristotle defined the concept of reasoning (deduction) as:

A deduction [sullogismos] is an argument in which, if certain things are supposed [i.e. the truth of the premises], then something other than the things supposed [i.e. the conclusion and its truth] follows of necessity from their being true [i.e. from the premises being true]. (Prior Analytics I.1, 24b18–20)
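In modern first-order notation, this classic pattern of deduction (the “Barbara” syllogism: all M are P; all S are M; therefore all S are P) can be written as

∀x (M(x) → P(x)), ∀x (S(x) → M(x)) ⊢ ∀x (S(x) → P(x))

For instance: all animals are mortal, and all humans are animals; hence all humans are mortal.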

Aristotle’s spirit of inquiry laid the foundations of logic, and thus the foundations of science and artificial intelligence. From antiquity to WWII, the mathematical ecosystem, the philosophical foundations, and the technological inventions developed step by step, laying the groundwork for what we now call the Turing machine.

The Turing Era

Alan Turing was an English mathematician and cryptographer, known as the father of AI. In his paper “On Computable Numbers, with an Application to the Entscheidungsproblem”, he demonstrated that some mathematical problems cannot be solved by a fixed, definite (i.e. deterministic) process, which he characterized as

“a process that can be performed by an automated machine”.

There was, therefore, an imperative need for a system capable of “making decisions” on its own. This need culminated during World War II, when the Nazis encrypted their military communications (transmitted in Morse code), including messages about where the next attack would take place, using a cipher machine called “Enigma”. The cryptanalysis (cracking) of Enigma was made possible thanks to Turing, who created a machine that automated the process. In doing so, he laid the foundations of computer science as well as AI.

In the 1940s and 1950s, the scientific community began discussing the creation of an artificial brain and examining how feasible it might be. Claude Shannon’s publication “A Mathematical Theory of Communication” was the point of departure for what we now call information theory, while Turing’s work laid out the theory of computation. Some years later, in 1958, Frank Rosenblatt invented the perceptron algorithm.

The perceptron is inspired by the neurons of the human brain. Perceptrons can be thought of as artificial neurons whose synapses interconnect and exchange information, much like biological neurons exchanging electrical signals. The perceptron is the foundation of neural networks (i.e. networks made of artificial neurons).
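To make the idea concrete, here is a minimal perceptron sketch in Python (an illustrative toy assuming NumPy, not Rosenblatt’s original formulation):

import numpy as np

class Perceptron:
    """A single artificial neuron with a hard threshold activation."""

    def __init__(self, n_inputs):
        self.w = np.zeros(n_inputs)  # "synaptic" weights
        self.b = 0.0                 # bias (threshold) term

    def predict(self, x):
        # Fire (output 1) if the weighted input sum crosses the threshold.
        return 1 if np.dot(self.w, x) + self.b > 0 else 0

    def train(self, X, y, epochs=10, lr=0.1):
        # Rosenblatt's learning rule: nudge the weights whenever wrong.
        for _ in range(epochs):
            for xi, target in zip(X, y):
                error = target - self.predict(xi)
                self.w += lr * error * xi
                self.b += lr * error

# Toy usage: learn the logical AND function (linearly separable).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
p = Perceptron(n_inputs=2)
p.train(X, y)
print([p.predict(xi) for xi in X])  # expected output: [0, 0, 0, 1]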

The twentieth century was also the period during which the scenario of creating an intelligent machine appeared strongly in literature and cinema, and the period when neuroscience suggested that the brain is made of a network of neurons. These facts, together with scientific, biological, and technological breakthroughs, marked a new beginning and suggested that it might be possible to emulate the brain using technology. The New York Times once wrote about the perceptron:

The Navy revealed the embryo of an electronic computer that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.

AI officially began to emerge as a scientific discipline around 1952, and by 1956 it had been established as an academic field.

Turing’s Test

AI’s primary goal is to create a machine that can “think and make rational decisions”. However, the concept of thinking is quite general and difficult to define precisely. In the first paragraph of his paper “Computing Machinery and Intelligence”, Turing wrote:

I propose to consider the question, ‘Can machines think?’ This should begin with definitions of the meaning of the terms ‘machine’ and ‘think’. The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words ‘machine’ and ‘think’ are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, ‘Can machines think?’ is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another […]

Later in the paper, he rebuts common objections to this idea. One example is “Lady Lovelace’s Objection”, which holds that machines only do what we order them to do; in other words, since machines are deterministic, they will always produce fixed, expected results. We now know (something Turing assumed to be true) that, using statistical methods, we can create a machine that “decides” on its own. To make the question testable, Turing proposed the famous Turing test, also known as the Imitation Game: if a machine can carry out an intelligent exchange with a human (such as a conversation over teletype) in such a way that a third party cannot tell which of the two participants is the human and which is the machine, then the machine has passed the Turing test.

In order to pass the Turing test, a machine must not only “be intelligent” in some sense, but also strike a balance between the two following observations:

Rational behavior is sometimes inhuman: no human can calculate a complex double integral in a few seconds, but a machine can. This is “too rational to be human” behavior, which a machine trying to pass the test would have to avoid.

Human behavior is sometimes irrational: Aristotle stated that the human being has a rational principle, so humans are, for the most part, rational beings. But even in the most intelligent humans, certain aspects of behavior are not rational. For example, a student attempting to solve a math equation may make calculation errors.

Turing’s Propositions about AI

The birth of AI (Dartmouth Workshop, 1956)

The Dartmouth Summer Research Project on Artificial Intelligence marked the official beginning of AI as a branch of computer science. Bright scientific minds met to discuss and brainstorm around one main idea:

[To find out if] every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it (2).

Some of the greatest minds of the century attended this workshop, including John Nash, the father of the Nash equilibrium (game theory); Claude Shannon, the father of information theory; John McCarthy, the father of Lisp and garbage collection; and others.

AI Winters

The term “AI winter” (inspired by the term “nuclear winter”) refers to periods in which interest in, and funding for, AI research declined as a result of overestimating the technological capabilities of the time. The lack of computational power and memory was the greatest obstacle the AI community had to face. Another essential asset was missing too: data.

In 1965, Gordon Moore (later a co-founder of Intel) formulated the famous Moore’s Law:

Computing will increase in power and decrease in cost at an exponential pace.

This extrapolation, which emerged from his careful observations, was proved true by history, marking the beginning of a new era.
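Stated as a rough formula (a common simplification, assuming transistor counts double roughly every two years):

N(t) = N₀ · 2^(t/2)

where N₀ is the transistor count in a reference year and t the number of years elapsed. Starting from the roughly 2,300 transistors of the Intel 4004 in 1971, this predicts counts in the billions by the mid-2010s, which is roughly what happened.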

The Present: Deep Learning Era

Since the Dartmouth workshop, there has been exponential progress in every aspect of computer science, AI included. In this section, we discuss the main milestones of AI from a high-level point of view; later on, we will dive into the theory behind the implementation of the phishing URL classification system: machine learning.

In general, AI can be subdivided into two main branches: symbolic AI and machine learning. Symbolic AI is also known as GOFAI, Good Old-Fashioned Artificial Intelligence. It can be thought of as rule-based AI made of explicit if-then conditions, as the sketch below illustrates.
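For illustration, a GOFAI-style detector for the phishing URL problem mentioned above might be nothing more than a hand-written chain of rules (a hypothetical Python sketch; the rules and the example URL are invented):

def looks_like_phishing(url: str) -> bool:
    # Every rule is an explicit if-then condition authored by a human expert.
    if "@" in url:                  # credentials smuggled into the URL
        return True
    if url.count("-") >= 4:         # suspiciously many hyphens
        return True
    if url.startswith("http://") and "login" in url:  # unencrypted login page
        return True
    return False                    # no rule fired: assume benign

print(looks_like_phishing("http://paypal-secure-login.example.com/login"))  # True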

On the other side of the coin stands machine learning, which is based on data: instead of following explicit instructions to make decisions, computers are trained on data and predict outputs in a statistical manner. Although these algorithms are deterministic at their core, their behavior is usually described as stochastic.

The Branches of Artificial Intelligence (AI)

The stochastic nature of these methods lies in the “non-explicit” way they perform tasks such as classifying objects. It is not the code that changes (with neural networks perhaps being an exception to this rule); it is the data. Their results, neural networks included, depend heavily on the data they are trained on. This is why the phrase “data is the new gold” went viral in recent years.
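By contrast, here is what a data-driven version of the same task might look like (a minimal sketch assuming scikit-learn; the features, URLs, and labels form a made-up toy dataset):

from sklearn.linear_model import LogisticRegression

def features(url):
    # Hypothetical hand-crafted features: length, digit count, hyphen count.
    return [len(url), sum(c.isdigit() for c in url), url.count("-")]

# Tiny invented training set: 1 = phishing, 0 = benign.
urls = [
    "http://paypa1-login-verify-account.example.com",
    "https://en.wikipedia.org/wiki/Alan_Turing",
    "http://secure-update-bank-0042.example.net/login",
    "https://github.com/",
]
labels = [1, 0, 1, 0]

# The decision rules are learned from the data, not written by hand.
model = LogisticRegression().fit([features(u) for u in urls], labels)
print(model.predict([features("http://free-gift-card-777.example.org")]))

The structure of the code stays the same for any dataset; what the model actually “knows” comes entirely from the training data.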

Machine learning, in turn, expands into two big branches: statistical learning and deep learning.

The different types of neural networks

Different types of neural networks are used for different types of tasks. The most recent type among those presented are GANs (Generative Adversarial Networks), invented in 2014. GANs are pairs of neural networks that compete with each other (a minimal sketch follows the quote below). A low blow for the AI utopia dreamers!

“Πόλεμος πάντων μὲν πατήρ ἐστι, πάντων δὲ βασιλεύς, καὶ τοὺς μὲν θεοὺς ἔδειξε τοὺς δὲ ἀνθρώπους, τοὺς μὲν δούλους ἐποίησε τοὺς δὲ ἐλευθέρους.”

“War is father of all, and king of all. He renders some gods, others men; he makes some slaves, others free.”

Heraclitus
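To make this adversarial game concrete, here is a minimal GAN sketch (an illustrative toy assuming PyTorch, not the original 2014 architecture): a generator learns to forge samples from a normal distribution centered at 4, while a discriminator learns to tell real samples from forgeries.

import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))                # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(3000):
    real = torch.randn(64, 1) + 4.0    # "real" data: samples around 4
    fake = G(torch.randn(64, 1))       # the generator's forgeries

    # Discriminator step: push D(real) towards 1 and D(fake) towards 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to fool the discriminator into outputting 1.
    g_loss = bce(D(G(torch.randn(64, 1))), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(G(torch.randn(1000, 1)).mean().item())  # should drift towards ~4.0

Each network improves only because its opponent does: war, in Heraclitus’ sense, as a training signal.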

John Nash not only attended the Dartmouth workshop but also lived long enough to see his contributions to game theory applied in AI. Deep learning has boosted hopes for the great ambition: the creation of an artificial general intelligence. But we still have a long way to go.

Some philosophical considerations

Artificial intelligence is closely related to philosophy, due to the very nature of the idea of simulating consciousness. Moral, epistemological, and even ontological questions (such as the existence of free will) emerge. An ideal AI machine that acts intelligently would suggest that “thinking” can be modeled as a deterministic process. And if we were able to create an artificial brain isomorphic to our biological one, who could then rule out the possibility that our brain is deterministic too? It would certainly be a strong indication. Descartes (and we) wouldn’t be glad to hear it…

Some of the questions that are to be answered are the following:

  • Can a machine act intelligently (Turing’s dilemma)?
  • What moral system should be enforced on an intelligent machine?
  • Can a machine be creative?

Although these questions may seem abstract, overly philosophical, and impractical at first, this is far from the truth. Self-driving cars are already on the production line, and on the road. These machines make ethical decisions and face dilemmas isomorphic to the “Trolley Problem”:

A train is hurtling towards five people. You are called to make a choice: pull a lever and divert the train onto a track where it will kill one innocent human, or leave it as it is and let it kill five innocent humans.

On the other hand, GANs have been used to produce new content, showing some kind of creativity.

World’s first AI-generated painting, created with GANs
GanGogh — Creating art with GANS

Recently, another groundbreaking AI announcement was made: Generative Pre-trained Transformer 3 (GPT-3).

Among other things, GPT-3 has a sense of… humor:

“I have a bad joke but my delivery is good”

In that context, questions such as “Can a machine be creative?” have already been partially answered: it can. The more AI advances, the more we should invest in its philosophical study. The need for philosophical research is as great as the need for scientific research, now maybe more than ever.

References

(1) P. Grimal, The Dictionary of Classical Mythology, Oxford, 2000.

(2) J. McCarthy, M. L. Minsky, N. Rochester, and C. E. Shannon, “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence”, August 31, 1955.
