What is Artificial Intelligence?

by Dr. Phil Winder, CEO

If you ask anyone what they think AI is, they’re probably going to talk about sci-fi. We sometimes even need to set the record straight during our AI consulting projects. Science fiction has been greatly influenced by the field of artificial intelligence, or A.I.

Probably the two most famous books about A.I. are I, Robot, released in 1950 by Isaac Asimov, and 2001: A Space Odyssey, released in 1968 by Arthur C. Clarke.

I, Robot introduced the three laws of robotics: 1) a robot must not injure a human being; 2) a robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law; and 3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

2001: A Space Odyssey is a story about a psychopathic A.I. called HAL 9000 that intentionally tries to kill the humans on board a spacecraft to save its own skin, in a sense.

But the history of AI stems back much further…

Early History

Talos: A bronze automaton from Greek Mythology

In Greek Mythology, Talos was a giant, bronze automaton that was created by the god Hephaestus.

He was given to Europa by Zeus to protect her while she was on Crete. He patrolled the island to protect it from pirates by throwing rocks.

There are many other references in mythology to automated agents. In China, it was said that engineers built mechanical men to impress the Emperor. And the world over, inanimate objects were imbued with human qualities.

Dark Ages

During the dark ages, the claims became a little more dubious. In 1206 the Islamic scholar Al-Jazari claimed to have created a programmable orchestra of mechanical musicians.

My favorite: around 1500, Paracelsus claimed to have created an artificial man out of magnetism, alchemy and sperm. Can you imagine what his experiments must have involved? “I’m going to take this magnet, this rock, then I’m just going to unbutton my trousers….” I’m pretty sure this was the beginnings of the sex toy industry.

A famous one: in 1580, a rabbi in Prague claimed to have brought a clay man, called a Golem, to life. And even serious thinkers like Descartes proposed that animals were just complex machines.

Post-Renaissance

In 1651 Thomas Hobbes published Leviathan and presented a mechanical, combinatorial theory of cognition. He wrote “…reason is nothing but reckoning”.

Around the same time, Blaise Pascal invented the mechanical calculator, the first digital calculating machine. Gottfried Leibniz later improved on these early machines, building the Stepped Reckoner to perform multiplication and division.

Leibniz also invented the binary numeral system and developed differential calculus independently of Newton. He began to envision a universal calculus of reasoning (an “alphabet of human thought”) by which arguments could be decided mechanically.

In the extreme, Leibniz worked on assigning a specific number to each and every object in the world, as a prelude to an algebraic solution to all possible problems.

Around the 1950s: Confluence

However, in the years leading up to the 1950s, several different discoveries significantly impacted the future of A.I.

Research in neurology had shown that the brain formed an electrical network of neurons.

Claude Shannon showed how all information could be represented by a limited number of bits.

Alan Turing defined the functional limitations of any computing engine.

These ideas, and more, allowed people to believe in the possibility of creating an artificial brain.

The First Robots

In 1950 William Grey Walter created what became known as the first “intelligent” robot.

It consisted of a phototube: a gas-filled chamber that looks like a lightbulb. When light hits the gas, its electrical resistance changes. He attached this to vacuum-tube amplifiers that drove motors. The result was a robot that could follow a light, much like a moth.

But something else grabbed the public’s attention. Lights were placed near power outlets. When the robots ran low on battery, they would drive towards the lights to recharge. People started to believe that this simulated some sort of animal intelligence.

Seminal Moment: The Turing Test

Turing's seminal paper on AI: Computing Machinery and Intelligence

But the seminal moment came in 1950, when Alan Turing published Computing Machinery and Intelligence, a landmark paper in which he speculated about the possibility of creating machines that think.

He noted that “thinking” is difficult to define and devised his famous Turing Test. If a machine could carry on a conversation (over a teleprinter) that was indistinguishable from a conversation with a human being, then it was reasonable to say that the machine was “thinking”.

This simplified version of the problem allowed Turing to argue convincingly that a “thinking machine” was at least plausible and the paper answered all the most common objections to the proposition.

The Turing Test was the first serious proposal in the philosophy of artificial intelligence.

Church-Turing Thesis

Turing, together with Alonzo Church, developed what became known as the Church-Turing thesis, which speculates that any algorithm that can be performed by a human can be replicated by a machine.

A subsequent theory by David Deutsch suggested that any physical process can be simulated by a universal computing device, assuming that the laws of quantum physics describe every process and that we have a system powerful enough to compute those laws (e.g. quantum computers).

This idea rapidly gained traction in scientific circles, and the general public became both very excited and very scared. In theory, if we have a computer that is powerful enough, we can simulate any natural process.

This includes us.

1955-1970

Between the 1950s and 1970s, writers dreamed of dystopian futures. These publications helped spread the promise of A.I. into the mainstream.

Around this time, prominent scientists began making grandiose predictions. In 1958 H. A. Simon predicted that:

within ten years a digital computer will be the world’s chess champion

In 1970 Marvin Minsky suggested:

In from three to eight years we will have a machine with the general intelligence of an average human being.

With all the attention, institutions were being funded all over the world. Between 1963 and 1970, DARPA funded MIT’s AI lab with the equivalent of $25M per year for open-ended research. There were no concrete goals.

1970s

It would not last.

In 1973, the Lighthill report on the state of AI research in the UK criticized the utter failure of AI to achieve its “grandiose objectives”, which led to the dismantling of AI research in the UK. Several fundamental problems lay behind that failure:

  • Limited computational power

Typical processors of the time managed about 1 MIPS. Current computer vision implementations require between 10,000 and 1,000,000 MIPS.

  • Curse of Dimensionality

Many problems can only be solved in exponential time. “Toy” examples would never scale into useful systems.

For example, imagine that a computation takes 10 seconds for one input, and that in an exponential-time problem each additional input multiplies the run time by 10. With 10 inputs, it would take roughly 317 years to compute.

With 20 inputs, it would take around 3 trillion years. Considering the universe has only been around for 13.82 billion years, that’s probably not feasible. (See the sketch after this list.)

  • “Common Sense” knowledge.

Many problems, e.g. computer vision, require enormous amounts of supplementary information about the world.

E.g. Our implicit knowledge about gravity defines how things “should” look.
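
To make the arithmetic above concrete, here is a minimal Python sketch of the same back-of-the-envelope calculation. The 10-seconds-per-input, multiply-by-10 growth model is the illustrative assumption from the example above, not a property of any particular algorithm.

```python
# Why exponential-time "toy" problems never scale.
# Assumption (from the example above): one input takes 10 seconds,
# and every additional input multiplies the run time by 10.

SECONDS_PER_YEAR = 60 * 60 * 24 * 365


def runtime_seconds(n_inputs: int, base_seconds: float = 10.0) -> float:
    """Run time under the illustrative exponential growth model."""
    return base_seconds ** n_inputs


for n in (10, 20):
    years = runtime_seconds(n) / SECONDS_PER_YEAR
    print(f"{n} inputs -> about {years:,.0f} years")

# 10 inputs -> about 317 years
# 20 inputs -> about 3,170,979,198,376 years (~3 trillion)
```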

1980-2000: Rise of expert systems

The solution to the previous problems was constraints.

Von Neumann, back in 1948, had it right all along.

Student: “A machine cannot possibly think!”

von Neumann: “If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that!”

An expert system is a product designed to do one thing well. (Unix fans rejoice!)

Adding constraints to the problem makes it tractable, as the sketch below illustrates.
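
As an illustration only (a toy example, not any particular historical system), here is a minimal rule-based “expert system” sketch in Python. The facts and rules are invented for this example; the point is that hand-written if-then knowledge over a tightly constrained domain is what made these systems tractable.

```python
# A toy forward-chaining "expert system": hand-written if-then rules
# applied to known facts until no new facts can be derived.
# The fault-diagnosis domain below is made up purely for illustration.

RULES = [
    # (conditions that must all be known, conclusion to add)
    ({"engine_wont_start", "lights_dim"}, "battery_flat"),
    ({"engine_wont_start", "lights_ok"}, "starter_motor_fault"),
    ({"battery_flat"}, "recommend_recharging_battery"),
]


def infer(facts: set) -> set:
    """Repeatedly apply the rules until no rule adds a new fact."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts


print(infer({"engine_wont_start", "lights_dim"}))
# {'engine_wont_start', 'lights_dim', 'battery_flat', 'recommend_recharging_battery'}
```

Constraining the domain to a handful of known facts and rules is exactly what keeps the search tractable; the same approach collapses when the required world knowledge is open-ended.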

2000s: Finally

In 2000, it had been 50 years since Turing’s paper. Some feats had been achieved, but they were very niche and not very relatable.

But in 1997, an IBM black box (of course, IBM has to sell black boxes!) called Deep Blue had beaten the reigning world chess champion, 39 years after H. A. Simon’s initial prediction.

Why Now?

These successes were not due to a new, magical paradigm; the techniques already existed. Nor were they due to a stroke of genius.

The reason was tedious engineering and raw computing power.

Date | Machine/Chipset | MIPS
1951 | Ferranti Mark 1 | 0.0008
1997 | Deep Blue | 11,380
2016 | Intel Core i7 6950X | 317,900

In 1951, the first ever chess program was produced on a Ferranti Mark 1. (It could only make one move).

A modern laptop processor is roughly 400 million times faster than the Ferranti Mark 1 was in 1951 (317,900 / 0.0008 ≈ 4 × 10⁸).
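
As a quick sanity check on that figure, dividing the MIPS values quoted in the table gives the speed-up directly:

```python
# Back-of-the-envelope speed-up implied by the MIPS table above.
ferranti_mark_1_mips = 0.0008   # 1951
core_i7_6950x_mips = 317_900    # 2016

speedup = core_i7_6950x_mips / ferranti_mark_1_mips
print(f"~{speedup:,.0f}x")      # ~397,375,000x, i.e. roughly 400 million times faster
```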

So AI Now?

Yes, in a sense. If we take Hobbes’s and Turing’s view of what AI is, the simplest definition is “can a machine do a job at least as well as a human?”

In those terms, we’re already there for many tasks. E.g. car manufacturing. Image recognition. Language translation. Compared to the average human, computers are far better at these tasks.

But that’s not the real problem. A.I. is a concept. An idea. It’s a philosophical debate as to whether machines will ever truly be human.

For engineers, the argument is interesting, but pointless.

Which brings me to my main point.

Stop saying A.I.

Seriously, it makes no sense.

A.I. is a philosophical debate. In fact, it’s more of a debate about who we are, rather than what machines are.

Are we just a complex matrix of neurons that fire electrical pulses to achieve a complex behaviour?

Are we just biological machines?

Do we “think”?

#DataScience

Instead, I much prefer the use of the term Data Science. Data Science is the act of engineering value from data. It has concrete applications in both business and engineering.

I prefer the name data science because it more accurately describes the multidisciplinary requirements.

To find out more about data science, I’d recommend that you look at my other post: What is Data Science?
