Artificial Intelligence

Artificial intelligence (AI) is defined as intelligence exhibited by an artificial entity. Such a system is generally assumed to be a computer.

Although AI has a strong science fiction connotation, it forms a vital branch of computer science, dealing with intelligent behavior, learning and adaptation in machines. Research in AI is concerned with producing machines to automate tasks requiring intelligent behavior. Examples include control, planning and scheduling, the ability to answer diagnostic and consumer questions, handwriting, speech, and facial recognition. As such, it has become a scientific discipline, focused on providing solutions to real life problems. AI systems are now in routine use in economics, medicine, engineering and the military, as well as being built into many common home computer software applications, traditional strategy games like computer chess and other video games.


Schools of Thought

AI divides roughly into two schools of thought: Conventional AI and Computational Intelligence (CI).

Conventional AI mostly involves methods now classified as machine learning, characterized by formalism and statistical analysis. This is also known as symbolic AI, logical AI, neat AI and Good Old Fashioned Artificial Intelligence (GOFAI). (Also see semantics.) Methods include:
  • Expert systems: apply reasoning capabilities to reach a conclusion. An expert system can process large amounts of known information and provide conclusions based on them.
  • Case-based reasoning: solves new problems by adapting solutions that worked for similar past problems.
  • Bayesian networks: probabilistic models that represent dependencies among variables and support reasoning under uncertainty.
  • Behavior-based AI: a modular method of building AI systems by hand.
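As an illustration of the rule-based reasoning behind expert systems, here is a minimal forward-chaining sketch in Python. The rules and fact names are invented for this example and do not come from any particular expert system.

```python
# A minimal forward-chaining rule engine sketch.
# The rules and facts below are hypothetical, chosen only for illustration.

RULES = [
    # (premises, conclusion): if all premises are known facts, add the conclusion.
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "see_doctor"),
]

def infer(facts):
    """Repeatedly apply rules until no new conclusions can be drawn."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            # A rule fires when its premises are a subset of the known facts.
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

conclusions = infer({"has_fever", "has_cough", "short_of_breath"})
print(sorted(conclusions))
```

Note that the second rule can only fire after the first has added "possible_flu", which is why the engine loops until the fact set stops changing.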
Computational Intelligence involves iterative development or learning (e.g. parameter tuning in connectionist systems). Learning is based on empirical data and is associated with non-symbolic AI, scruffy AI and soft computing. Methods mainly include:
  • Neural networks: systems with very strong pattern recognition capabilities.
  • Fuzzy systems: techniques for reasoning under uncertainty that have been widely used in modern industrial and consumer product control systems.
  • Evolutionary computation: applies biologically inspired concepts such as populations, mutation and survival of the fittest to generate increasingly better solutions to the problem. These methods most notably divide into evolutionary algorithms (e.g. genetic algorithms) and swarm intelligence (e.g. ant algorithms).
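The evolutionary-computation ideas above (populations, mutation, survival of the fittest) can be sketched with a toy genetic algorithm. Everything here, from the all-ones target string to the population size and mutation rate, is an illustrative choice for this sketch, not a method from the article.

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

# Toy problem: evolve 20-bit strings toward the all-ones string.
# All parameters are illustrative choices.
TARGET_LEN = 20
POP_SIZE = 30
MUTATION_RATE = 0.02
GENERATIONS = 200

def fitness(bits):
    # "Survival of the fittest": more 1-bits means a fitter individual.
    return sum(bits)

def crossover(a, b):
    # Single-point crossover combines two parent strings into a child.
    point = random.randrange(1, TARGET_LEN)
    return a[:point] + b[point:]

def mutate(bits):
    # Each bit flips with a small probability (mutation).
    return [1 - b if random.random() < MUTATION_RATE else b for b in bits]

def evolve():
    population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == TARGET_LEN:
            break
        # Keep the fitter half, refill with mutated offspring of random parents.
        parents = population[:POP_SIZE // 2]
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(fitness(best))
```

Because the fitter half of each generation survives unchanged, the best fitness never decreases; the population steadily climbs toward the target.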
Hybrid intelligent systems attempt to combine these two groups. Expert inference rules can be generated through neural networks, or production rules can be derived from statistical learning, as in ACT-R.

A promising new approach called intelligence amplification tries to achieve artificial intelligence in an evolutionary development process as a side-effect of amplifying human intelligence through technology.


Philosophy of Artificial Intelligence

The strong AI vs. weak AI debate is still a hot topic amongst AI philosophers. It involves the philosophy of mind and the mind-body problem. Most notably, Roger Penrose in his book The Emperor's New Mind and John Searle with his "Chinese room" thought experiment argue that true consciousness cannot be achieved by formal logic systems, while Douglas Hofstadter in Gödel, Escher, Bach and Daniel Dennett in Consciousness Explained argue in favour of functionalism. Many strong AI supporters consider artificial consciousness the holy grail of artificial intelligence.


Ethical Issue of AI

Artificial Intelligence, or AI, is a somewhat recent (1960s) development in computer science. If sentient AIs, as depicted in many science fiction movies, became real, many ethical problems would arise:
  • Could a computer simulate an animal or human brain in a way that the simulation should receive the same animal rights or human rights as the actual creature?
  • Under what preconditions could such a simulation be allowed to happen at all?
  • What are the possible criteria for a computer, whether it simulates a brain or not, to be considered sentient or sapient? Is the Turing test applicable?
  • Can a computer that must be considered sentient ever be turned off?
  • In order to be intelligent, does AI need to replicate human thought, and if so, to what extent (e.g. can expert systems become AI)? What other avenues to achieving AI exist?
  • Can AI be defined in a graded sense (e.g. with human-level intelligence graded as 1.0)? What does it mean to have a graduated scale? Is categorisation necessary or important?
  • AI rights: if an AI is comparable in intelligence to humans, then it should have comparable rights. (As a corollary, if an AI is more intelligent than humans, would we retain our 'rights'?)
  • Can AIs be "smarter" than humans in the same way that we are "smarter" than other animals (Strong Artificial Intelligence)?
  • Designing and implementing AI 'safeguards': it is crucial to understand why safeguards should be considered in the first place; however, to what extent is it possible to implement safeguards in relation to a superhuman AI? How effective could any such safeguards be?
  • Some may question the impact upon careers and jobs (e.g. there would at least be potential for the problems associated with free trade); however, the more crucial issue is the wider impact upon humanity.
  • Technological singularity
A major influence in the AI ethics dialogue was Isaac Asimov, who fictitiously created the Three Laws of Robotics to govern artificially intelligent systems. Much of his work was then spent testing the boundaries of his three laws to see where they would break down, or where they would create paradoxical or unanticipated behavior. Ultimately, his work suggests that no set of fixed laws can sufficiently anticipate the possible behavior of AI agents in human society. A criticism of Asimov's robot laws is that installing unalterable laws into a sentient consciousness would be a limitation of free will and therefore unethical. Consequently, Asimov's robot laws would be restricted to explicitly non-sentient machines, which possibly could not be made to reliably understand them under all possible circumstances.

The movie The Thirteenth Floor suggests a future where simulated worlds with sentient inhabitants are created by computer game consoles for the purpose of entertainment. The movie The Matrix suggests a future where the dominant species on planet Earth are sentient machines and humanity is treated with utmost speciesism. The short story The Planck Dive suggests a future where humanity has turned itself into software that can be duplicated and optimized, and where the relevant distinction between types of software is sentient versus non-sentient. The same idea can be found in the Emergency Medical Hologram of the starship Voyager, an apparently sentient copy of a reduced subset of the consciousness of its creator, Dr. Zimmerman, who, for the best of motives, created the system to give medical assistance in emergencies. The movies Bicentennial Man and A.I. deal with the possibility of sentient robots that could love. I, Robot explored some aspects of Asimov's three laws. All these scenarios try to foresee possibly unethical consequences of creating sentient computers.

Over time, debates have tended to focus less and less on possibility and more on desirability, as emphasized in the "Cosmist" and "Terran" debates initiated by Hugo de Garis and Kevin Warwick. A Cosmist, according to Hugo de Garis, is actually seeking to build more intelligent successors to the human species.


Expectations of AI

AI methods are often employed in cognitive science research, which tries to model subsystems of human cognition. Historically, AI researchers aimed for the loftier goal of so-called strong AI: simulating complete, human-like intelligence. This goal is epitomised by the fictional strong AI computer HAL 9000 in the film 2001: A Space Odyssey. It is unlikely to be met in the near future and is no longer the subject of most serious AI research. The label "AI" has acquired something of a bad name due to the failure of these early expectations, aggravated by various popular science writers and media personalities such as Professor Kevin Warwick, whose work has raised expectations of AI research far beyond its current capabilities. For this reason, many AI researchers say they work in cognitive science, informatics, statistical inference or information engineering. Recent research areas include Bayesian networks and artificial life.

The vision of artificial intelligence replacing human professional judgment has arisen many times in the history of the field, and today expert systems are routinely used to augment or replace professional judgment in some specialized areas of engineering and medicine.

Even though a substantial amount of AI functionality exists in everyday software, some misinformed commentators on computer technology have tried to suggest that a good definition of AI would be "research that has not yet been commercialised". This happens because once AI is incorporated into an operating system or application, it becomes an understated feature.

2 Thoughts:

Ian Parker

Monday, June 26, 2006 5:56:00 PM

I think you should have a look at my blog in

http://ipai.blogspot.com

I view AI in terms of natural language. My reasons are the following.

1) A machine such as you describe is a logical impossibility, since if one were ever to be built it would become a spider thereby producing super intelligence.

2) The most successful AI program to date is Eliza which does a very similar job to Google. We often, when we are speaking loosely, talk about "Intelligence" and "Knowledge" synonymously. Computers already have more knowledge than they need. The task is to order it.

Note - The concepts need the occasional use of a second language to clarify meaning. Spanish is used in the blog.

The other problem is the understanding of space.
