Artificial Intelligence
In 1637 René Descartes, the French mathematician and philosopher, predicted that it would never be possible to make a machine that thinks as humans do. That was a rather astonishing observation, considering that Charles Babbage devised the concept of the Analytical Engine only two hundred years later. Babbage never completed his Analytical Engine, but his theories laid the early foundations of artificial intelligence.
British mathematician Alan Mathison Turing is widely regarded as the father of artificial intelligence. In 1950 he declared that there would one day be a machine that could duplicate human intelligence. He devised a test, now known as the “Turing test”, to check for artificial intelligence. In the test, a human and a computer, both hidden from view, are asked identical questions. The computer succeeds if the questioner cannot tell the machine from the human.
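To make the setup concrete, here is a toy sketch of the test in Python. It is purely illustrative, not any historical implementation: the respondents, their canned answers and the judge’s guess are all invented for the example.

    import random

    # Toy sketch of Turing's imitation game: a judge poses identical questions
    # to two hidden respondents and must guess which one is the machine.
    # Both respondents and their answers are entirely hypothetical.

    def human_respondent(question):
        return f"Hmm, {question!r}? Let me think for a moment."

    def machine_respondent(question):
        return f"Hmm, {question!r}? Let me think for a moment."

    def imitation_game(questions):
        # Hide the identities behind the anonymous labels A and B.
        players = [("human", human_respondent), ("machine", machine_respondent)]
        random.shuffle(players)
        labels = dict(zip("AB", players))

        for q in questions:
            for label, (_, answer) in labels.items():
                print(f"Q: {q}  {label}: {answer(q)}")

        # The machine "passes" if the judge can do no better than chance.
        guess = random.choice("AB")
        print(f"Judge guesses {guess} is the machine;"
              f" it is actually the {labels[guess][0]}.")

    imitation_game(["What is a sonnet?", "Do you ever feel bored?"])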
Turing had already argued in 1947 that the brain could itself be regarded as a computer. Working on his Automatic Computing Engine, he declared that he was more interested in producing models of the action of the brain than in the practical applications of computers.
AI laboratories
The first conference on artificial intelligence was held at Dartmouth College, New Hampshire, in 1956. It led to the establishment of AI laboratories at MIT (the Massachusetts Institute of Technology) by Marvin Minsky and John McCarthy (who invented the AI programming language Lisp) and at Stanford University by Edward Feigenbaum and Joshua Lederberg. Herbert Simon and Allen Newell of the RAND Corporation ran tests showing that the ones and zeros of computer language could be used to represent not only numbers but also symbols. Between 1958 and 1960 the psychologist Frank Rosenblatt of Cornell University modelled his Perceptron computer on the human brain and “trained” it to recognize the alphabet. The chase was on to develop “neural networks” of computer processors.
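Rosenblatt’s Perceptron was custom hardware, but the learning rule behind it is short enough to write down. The Python sketch below is a minimal software rendering of that rule on an invented toy task (telling a 5×5-pixel “L” from a “T”), not a reconstruction of his machine; every pattern and parameter is made up for illustration.

    # Minimal perceptron trained with Rosenblatt's error-correction rule.
    # Toy task: distinguish a 5x5-pixel "L" from a "T" (patterns invented).

    L_PATTERN = [1,0,0,0,0,
                 1,0,0,0,0,
                 1,0,0,0,0,
                 1,0,0,0,0,
                 1,1,1,1,1]

    T_PATTERN = [1,1,1,1,1,
                 0,0,1,0,0,
                 0,0,1,0,0,
                 0,0,1,0,0,
                 0,0,1,0,0]

    def train(samples, epochs=10, lr=1.0):
        weights = [0.0] * len(samples[0][0])
        bias = 0.0
        for _ in range(epochs):
            for x, target in samples:              # target: +1 for L, -1 for T
                total = sum(w * xi for w, xi in zip(weights, x)) + bias
                predicted = 1 if total >= 0 else -1
                if predicted != target:            # adjust only on mistakes
                    weights = [w + lr * target * xi for w, xi in zip(weights, x)]
                    bias += lr * target
        return weights, bias

    def classify(weights, bias, x):
        total = sum(w * xi for w, xi in zip(weights, x)) + bias
        return "L" if total >= 0 else "T"

    weights, bias = train([(L_PATTERN, +1), (T_PATTERN, -1)])
    print(classify(weights, bias, L_PATTERN),
          classify(weights, bias, T_PATTERN))      # -> L T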
The human brain consists of more than 100 billion nerve cells (neurons), through which the brain’s commands are sent in the form of electric pulses. It can handle many operations at the same time (thinking, talking and walking at once, for instance); this is called parallel processing. Computers, by contrast, follow sets of logical steps called algorithms. Fast computers perform roughly 10 billion calculations per second, and supercomputers use multiple processors to follow several algorithms simultaneously.
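The contrast between one step at a time and several at once is easy to demonstrate. The hypothetical Python snippet below runs the same simple algorithm (summing squares) sequentially and then splits it across four processes; the numbers and chunk sizes are arbitrary.

    # Sequential vs. parallel execution of the same algorithm: summing squares.
    # A toy illustration of parallel processing; all sizes are arbitrary.
    from concurrent.futures import ProcessPoolExecutor

    def sum_of_squares(lo, hi):
        return sum(n * n for n in range(lo, hi))

    if __name__ == "__main__":
        N = 10_000_000

        # One processor: one algorithm, one step at a time.
        sequential = sum_of_squares(0, N)

        # Several processors: each runs the algorithm on its own slice.
        bounds = [(i * N // 4, (i + 1) * N // 4) for i in range(4)]
        with ProcessPoolExecutor(max_workers=4) as pool:
            parallel = sum(pool.map(sum_of_squares, *zip(*bounds)))

        assert sequential == parallel
        print(sequential)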
Back to the power of reasoning
When IBM’s Deep Blue defeated world chess champion Garry Kasparov in 1997, it was a boost for AI developers. Today a host of “smart devices” can recognize postal codes, patterns, symbols, handwriting, voices and so on. But no computer has yet mastered “plain common sense”. Computers, it seems, can talk to each other but not to humans.
If a computer is to think like a human, its “brain” must be built to work like a human’s. So, instead of using digital processors, scientists have developed silicon chips that work in analogue mode, the way a human brain cell does. A computer at the Argonne National Laboratory in Illinois used this mode of operation to process highly abstract problems, crudely approximating human reasoning.
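The article does not describe the Argonne chips themselves, but the core idea of analogue operation, accumulating and leaking a continuous charge rather than flipping between 0 and 1, can be caricatured in a few lines. The leaky integrate-and-fire neuron below is a standard textbook simplification, not the laboratory’s design; every constant is invented.

    # A leaky integrate-and-fire neuron: a standard, highly simplified model
    # of how an analogue brain cell sums continuous inputs. Constants invented.

    def simulate(inputs, leak=0.9, threshold=1.0):
        potential = 0.0
        spikes = []
        for current in inputs:                      # continuous values, not 0/1
            potential = potential * leak + current  # charge leaks away over time
            if potential >= threshold:              # the cell "fires"...
                spikes.append(1)
                potential = 0.0                     # ...and resets
            else:
                spikes.append(0)
        return spikes

    print(simulate([0.3, 0.4, 0.5, 0.1, 0.9, 0.2]))  # -> [0, 0, 1, 0, 0, 1]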
The androids
Personal assistant robots are perhaps not too far off. But how will these human-like robots, called androids, behave, and how will they be governed? Won’t they “take over the world”? If Isaac Asimov’s laws of robotics are followed, we’ll be safe.
1. Asimov’s first law is that robots may not harm humans either through action or inaction.
2. They must obey humans except when the commands conflict with the first law.
3. Androids must protect themselves except, again, when this comes into conflict with the first law.
Asimov later added a fourth law, the Zeroth Law, which takes precedence over the other three:
0. A robot may not harm humanity or, through inaction, allow humanity to come to harm.
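Taken together, the laws form a strict precedence ordering: the Zeroth Law outranks the First, the First outranks the Second, and so on. That ordering is simple enough to encode, as the playful and entirely hypothetical Python sketch below shows; it says nothing about how a real robot would judge harm.

    # A playful sketch of the precedence ordering in Asimov's laws.
    # The fields and examples are hypothetical; note that the
    # "through inaction" clauses are not modelled at all.
    from dataclasses import dataclass

    @dataclass
    class Action:
        description: str
        harms_humanity: bool = False      # Zeroth Law
        harms_human: bool = False         # First Law
        ordered_by_human: bool = False    # Second Law
        endangers_self: bool = False      # Third Law

    def permitted(action):
        if action.harms_humanity:         # Law 0 outranks everything
            return False
        if action.harms_human:            # Law 1 outranks obedience and survival
            return False
        if action.ordered_by_human:       # Law 2: obey, unless a higher law forbids
            return True
        return not action.endangers_self  # Law 3: self-preservation comes last

    print(permitted(Action("fetch coffee", ordered_by_human=True)))   # True
    print(permitted(Action("shove a person", ordered_by_human=True,
                           harms_human=True)))                        # False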
Which one is the computer? Computers talk to each other easily, but not to us. Is there something more we should know about artificial intelligence?
Factoids
Alan Turing (1912–1954) was born and studied in London but earned his doctorate from Princeton University in the US in 1938. During World War II he helped decipher the German Enigma codes, work that played an important role in the Allied victory. He committed suicide by ingesting cyanide.
Well-known science-fiction writer Isaac Asimov also wrote mysteries, studies of the Bible, interpretations of Shakespeare and informative articles on chemistry, astronomy, biology and mathematics. He also laid down rules for the future androids.
It takes the human brain approximately half a second to process and act on an input; even an average computer needs less than half that time. But computers cannot handle the extremely complex processes behind thought and emotion… yet.
Open a birthday card, listen to Happy Birthday – and throw the card in the bin. You’ve just thrown away more computing power than existed in the whole world before 1950. Computing power is developing at a staggering speed.
The word “robot” comes from the Czech robota, which means forced labor or drudgery. Playwright Karel Čapek introduced the word in his 1920 play R.U.R. – Rossum’s Universal Robots.
In about 270 BC the ancient Greek engineer Ctesibius built organs and water clocks with movable figures, effectively producing the world’s first robot.
Charles Babbage (1792–1871) is regarded as the father of the computer. He never completed his Analytical Engine because he could not raise the finance for it. Just imagine if there had been proper crowdfunding in his day. Who knows how far technology would have advanced by now?!