To many people, Artificial Intelligence (AI) is a science-fiction concept, yet this very broad field is already part of our daily lives, although we may not realise it. Tim Urban recently wrote:
Artificial Intelligence is the intelligence exhibited by machines or software. It is a field of study in which the goal is to create intelligence. The main goals of Artificial Intelligence research include the ability to reason; to have knowledge; to learn; to process natural language and communicate; to have perception; and the ability to move around and to manipulate objects. General intelligence is among the long-term goals of AI researchers.
There are three major AI categories:
Artificial Narrow Intelligence: Artificial Narrow Intelligence (ANI) is AI that specialises in one area: machine intelligence that equals or exceeds human intelligence or efficiency, but only in that specific area. Smartphone apps, spam filters, Google Translate and Google Search are all examples of ANI.
Artificial General Intelligence: Artificial General Intelligence (AGI) refers to a computer that is as smart as a human across the board, able to perform any intellectual task that a human being can.
Artificial Super-intelligence: Oxford philosopher and leading AI thinker Nick Bostrom defines super-intelligence as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills”. Artificial Super-intelligence ranges from a computer that’s just a little smarter than a human to one that is trillions of times smarter — across the board.
The field of Artificial Intelligence is interdisciplinary, a point where various sciences and professions converge: computer science, cognitive science, mathematics, psychology, philosophy, engineering, neuroscience, linguistics, artificial psychology and other specialised fields all contribute.
The field was founded on the claim that human intelligence can be so precisely described that a machine can be made to simulate it. This raises philosophical issues about the nature of the mind and the ethics of creating artificial beings with intelligence similar to that of humans. It also raises fears, since it is impossible to know what might happen once we create artificial intelligence that exceeds the intelligence of the most powerful human mind.
Physicist Stephen Hawking recently warned that once artificial intelligence surpasses human intelligence, it could pose a threat to the existence of human civilisation. Elon Musk has voiced similar concerns.
In response, futurist Ray Kurzweil points out that if Artificial Intelligence becomes an existential risk, it won’t be the first one. He says technology has always been a double-edged sword, since fire kept us warm but also burned down our villages. Kurzweil also believes we have enough time to devise ethical standards before we achieve human-level Artificial Intelligence.
Universities and companies are working on Artificial Intelligence safety strategies and guidelines, including clearly defining the mission of each AI program and building in encrypted safeguards.
According to Kurzweil, the most important approach we can take to keep AI safe is to work on our human governance and social institutions. He points out that we are already a human-machine civilisation and that the best way to avoid destructive conflict in the future is to continue the advance of our social ideals, which has already greatly reduced violence.
According to Geoffrey Shmigelsky, Artificial Intelligence is one of the least-understood and most underestimated fields of technology.
COPYRIGHT © 2015 Singularity Institute Africa