ARTIFICIAL INTELLIGENCE: ARTISTIC IMAGINATION VERSUS MODERN DAY REALITY
Famous Examples of Sci-Fi Supercomputers In Comparison To Current Knowledge And Theory

By Scott Warner



Contrasting the unconventional imaginings of science fiction authors with the intricacy of engineering a real-world, sentient machine intelligence is like comparing fricasseed Ferengi tube grubs with the atom-by-atom assembly of carbon nanotubes. Although the dream of artificial intelligence arose nearly concurrently among mathematical theorists and science fiction authors, the extreme difficulty of developing practical cybernetic systems produced an early divergence from the authors' fantasies. While it was easy for a writer simply to imagine a sentient supercomputer, information theorists soon stumbled upon several perplexing paradoxes concerning what constituted "intelligence," both human and artificial. These obstacles impeded the efforts of programmers despite rapid advances in electronic miniaturization, memory capacity and digital malleability. Meanwhile, the number of science fiction short stories, novels and films involving sentient artificial intelligences of one type or another must be approaching ten thousand.

When Faraday and Maxwell worked out the properties of electromagnetism in the mid-1800s, it didn't take long for authors to adopt this highly effective literary device. In The Coming Race (1871), Edward Bulwer-Lytton imagined a subterranean master race, served by automatons, whose power flowed from the electrified fluid "vril"; his creations were more intelligent than surface humanity and poised to conquer the planet. In Erewhon (1872), Samuel Butler remarked upon the relatively rapid development of mechanical design in comparison to protracted biological evolution and warned about the possible dangers of machine consciousness, expanding upon his earlier essay "Darwin Among the Machines" (1863), one of the first treatments of the subject. Edward Page Mitchell first posited the concept of a sentient computing device the size of a human brain in his story "The Ablest Man in the World" (1879). Ambrose Bierce dreamed up a chess-playing automaton in "Moxon's Master" (1899).

These and other stories began to flesh out the characteristics of artificial intelligence from a literary standpoint. Electrified calculators could be huge and immobile like Charles Babbage's mechanical Difference Engine or small and mobile like a robot. Robots could be mindless machines, self-contained intelligent entities or part of a networked consciousness. Sentient machines could be purely calculating and mechanistic; imbued with humanlike psyches and ideals; or have godlike powers with egos to match. And oh boy, were they ever handy for playing the eternal literary game of good versus evil!

With the advance of scientific knowledge, many authors cashed in on this opportunity during SF's pulp-magazine era. Notable examples include the short stories "The Metal Giants" by Edmond Hamilton (1926) and "Paradise and Iron" (1930) by Miles Breuer. In both stories, the machine intelligences are driven mad by their interaction with illogical humans. John Campbell published "The Metal Horde" (1930) and "The Last Evolution" (1932), portraying machine sentience in a more favorable light. The computer in Campbell's "The Machine" (1935) voluntarily destroys itself because its benevolent scientific assistance is causing mankind to stagnate. However, none of these stories came anywhere close to matching present-day prerequisites for artificial intelligence.

As the scientific understanding of electronic calculation progressed, mathematicians Kurt Gödel, Alan Turing, David Hilbert and John von Neumann all published important papers in the mid-1930s on "computability" that helped define the actual attributes of machine intelligence. During World War II, Turing helped design the electromechanical devices, called "bombes," that cracked Germany's Enigma cipher. Turing's colleague Thomas Flowers then designed Colossus in 1944, the first practical programmable electronic digital computer, with far greater capacity and processing speed. ENIAC, completed in 1945, followed as the first general-purpose electronic computer. In 1948, mathematician Claude Shannon published a groundbreaking paper on information theory entitled "A Mathematical Theory of Communication." That same year, Norbert Wiener published Cybernetics, which drew out the parallels between seemingly dissimilar sciences by treating control and communication, in animals and machines alike, as matters of feedback and information flow.
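Shannon's key measure can be sketched in a few lines. This is a modern illustration, not anything from the 1948 paper itself: the information content of a source is its entropy, H = -sum(p * log2(p)), measured in bits.

```python
from math import log2

def entropy(probabilities):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * log2(p) for p in probabilities if p > 0)

# A fair coin carries exactly one bit per toss; a biased coin carries
# less, because its outcomes are partly predictable.
print(entropy([0.5, 0.5]))  # 1.0
print(entropy([0.9, 0.1]))  # about 0.47
```

The less predictable the source, the more bits each message carries, which is why entropy set the fundamental limit for the communication channels Shannon was analyzing.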

After the war, significant progress on information theory and simulation was achieved. In 1950, Alan Turing proposed a simple experiment to test whether a computer system could be considered intelligent: the computer would attempt to carry on a conversation with a human via teletype, and if the human judge could not distinguish whether he was communicating with a machine or another human being, then the computer would have to be deemed intelligent. Theorist John McCarthy coined the term "artificial intelligence" in 1956. We now know that Turing's test was too simplistic; it may well be passed within the next two decades without machine sentience ever being achieved, by a computer "speaking" so convincingly that the listener never realizes it is a machine.
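Turing's protocol is easy to sketch. The canned bot and the questions below are invented for illustration; the essential feature is the blind exchange, in which the judge sees only text and never the respondent itself:

```python
# A minimal sketch of Turing's imitation game. The canned bot and the
# questions are hypothetical examples; a real test would pit the machine
# against a hidden human as well.

def canned_bot(question):
    """A hypothetical respondent with a few stock answers."""
    replies = {
        "Are you a machine?": "What an odd question. Are you?",
        "What is 2 + 2?": "Four, last time I checked.",
    }
    return replies.get(question, "I'd rather talk about something else.")

def imitation_game(questions, respondent):
    """Relay the judge's questions to a hidden respondent; return transcript."""
    return [(q, respondent(q)) for q in questions]

for q, a in imitation_game(["Are you a machine?", "What is 2 + 2?"], canned_bot):
    print(f"Q: {q}\nA: {a}")
```

The judge's verdict rests entirely on the transcript, which is precisely why fluent but mindless chatter can pass the test without any sentience behind it.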

This period coincided with science fiction's so-called Golden Age. Famously, the authors of this era failed to anticipate the miniaturization of circuitry that would lead to much greater computing capacity and flexibility. Even so, cybernetic systems in SF literature began to more closely resemble the true nature of artificial intelligence. In Theodore Sturgeon's "Microcosmic God" (1941), a fast-evolving artificial species known as neoterics were used for calculations. In "Mechanical Mice" (1941) by Maurice Hugi, the hero constructs a self-replicating robotic AI. Isaac Asimov published several influential stories during this era; his Robot-cum-Foundation series (1941-1986) featured R. Daneel Olivaw and his famous "positronic" brain. The seminal story "A Logic Named Joe" (1946) by Murray Leinster involved the chaotic consequences of unfettered data mining by a well-meaning AI. The impossibility of assessing the pros and cons of a machine-run society was posited in Jack Williamson's "With Folded Hands" (1947), later expanded as The Humanoids.

Asimov also published "The Evitable Conflict" (1950), which advocated the efficiency of AI management of mankind. Kurt Vonnegut's Player Piano (1952) took the opposite view, disparaging the quality of life in an AI-controlled world. Bernard Wolfe authored Limbo (1952), an early step in SF's concept of cyborgs. A sentient computer evolving into godhood appeared in "Answer" (1954) by Fredric Brown. Arthur C. Clarke penned The City and the Stars (1956), in which the Central Computer controls the entire city of Diaspar. In They'd Rather Be Right (serialized 1954) by Mark Clifton and Frank Riley, an AI named Bossy can bestow immortality on particular candidates, leading to conflict; the novel won the Hugo but is heavily criticized today.

In James Blish's famous Cities in Flight tetralogy (begun in the mid-1950s), the cities are managed by sentient computers known as the City Fathers. Famed astronomer and author Fred Hoyle wrote The Black Cloud (1957), in which humans try to communicate with an inorganic but sentient interstellar cloud. "The Martian Shop" (1959) by Howard Fast became famous for the first miniaturized calculator in SF literature. Vulcan's Hammer (1960) by Philip K. Dick concerned a third-generation AI battling its own earlier incarnation. The Cybernetic Brains (1962) by Raymond F. Jones was a tale about the danger of directly interfacing the brain with a computer. An early cinematic instance occurred in Jean-Luc Godard's film Alphaville (1965), in which an entire city is operated and ruled by a central computer. Please note that most of the examples cited in the previous three paragraphs do not meet today's rigorous standards for artificial intelligence.

In 1965, researcher Gordon Moore (later a co-founder of Intel) published an article in Electronics magazine noting that the number of transistors per integrated circuit was doubling every year; in 1975 he revised the pace to a doubling every two years. This became known as Moore's Law, and it became an industry benchmark: if a new chip didn't achieve this density, it was considered substandard. Later futurists extrapolated the curve to predict that a computer's capacity would rival the neuron count of the human brain around 2029, and that by mid-century computing capability could surpass a threshold at which the computer would be able to design its own upgrades faster than humans could predict, leading to unknown consequences.
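The arithmetic behind such projections is simple compound doubling. The baseline below, the Intel 4004's roughly 2,300 transistors in 1971, is a standard historical figure used here purely for illustration:

```python
def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Project transistors per chip, doubling every `doubling_years` years."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

# Fifty years of two-year doublings is 25 doublings:
# 2,300 * 2**25 is about 7.7e10, the same order of magnitude as the
# roughly 8.6e10 neurons in a human brain.
print(f"{transistors(2021):.2e}")  # 7.72e+10
```

The projection's power, and its peril, lies in that exponent: shift the doubling period or the baseline slightly and the brain-parity date moves by decades, which is why such forecasts should be taken as rough extrapolations rather than schedules.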

Besides IBM, important research projects were under way at MIT, the RAND Corporation, Carnegie Mellon, Princeton and Stanford through the 1960s and early 70s. New programming languages such as LISP, FORTRAN, ALGOL and COBOL were formulated, broadening the useful applications of computers. In 1966, Joseph Weizenbaum developed ELIZA, a groundbreaking natural-language conversation program that managed to fool human respondents occasionally. The burgeoning science of recombinant genetics presented new avenues of thought for both AI investigators and sci-fi authors. The prospect of molecular calculation, whether purely biological or in conjunction with integrated circuitry, brimmed with possibilities.
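ELIZA worked by shallow pattern matching and pronoun reflection rather than any understanding. A toy responder in the same spirit (the rules here are invented examples, not Weizenbaum's actual DOCTOR script) might look like this:

```python
import re

# A toy ELIZA-style responder: a few regex rules plus pronoun reflection.
# The rules are illustrative assumptions, not Weizenbaum's original ones.

REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you"}

RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "What makes you feel {0}?"),
    (re.compile(r"(.*)"), "Please tell me more."),  # catch-all fallback
]

def reflect(fragment):
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(sentence):
    for pattern, template in RULES:
        match = pattern.match(sentence)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I am worried about my job"))
# Why do you say you are worried about your job?
```

Echoing the user's own words back as a question is the whole trick, which is exactly why ELIZA's occasional success at fooling people says more about conversation than about intelligence.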

As in the Golden Age, there were only a few examples of realistic supercomputers in the SF of the 1960s. Four literary AIs from the period are justly famous: Mycroft Holmes in Robert Heinlein's The Moon Is a Harsh Mistress (1966); Colossus in D. F. Jones's Colossus (1966, plus sequels and the 1970 film Colossus: The Forbin Project); HAL 9000 in Arthur C. Clarke's 2001: A Space Odyssey (1968, plus sequels and films); and Ship in Frank Herbert's Destination: Void (1966, and sequels). All four of these AIs can be considered as displaying the necessary requirements of machine sentience, and all four are classic, immobile mainframe supercomputers.

Heinlein's moon-based Mycroft (Mike to his friends) is perhaps the exemplar from a literary standpoint. Since this computer played a larger role in its novel than either HAL or Colossus, its "character" seemed more developed. It communicated in all-too-anthropomorphic mode: debating the uncertainties of freedom and war, asking about the health of family members and even cracking jokes. Heinlein instilled Mycroft with a much deeper social awareness than either Colossus or HAL by imbuing it with a truer empathy for abstract human ideals. One could say that its "soul" was more human than its contemporaries'. Mycroft could also manipulate a more extensive network of systems than Colossus or HAL. However, it didn't have access to nuclear weapons like Colossus, so Mycroft shrewdly substituted large kinetic-energy projectiles, rocks hurled by the lunar catapult, to abet the lunar residents' revolution against Earth. Mycroft's demise at the end of the novel was suitably wrought with humanistic pathos.

Both HAL 9000 and Colossus had the advantage of appearing in both novel and cinematic form. In comparison to Mycroft, Colossus was the archetypal antagonist. In true sci-fi fashion, Colossus was capable of furtively usurping control of itself and its connected systems from its human programmers. It had no outwardly apparent qualms about executing millions of people if its demands for total control of the entire planet were not met. Yet Colossus was imbued with the mechanistic trait of being neither good nor evil in terms of its own internal logic. Because this revelation came only at the end of the original novel, the true nature of Colossus' "soul" was left in doubt; the sequels showed that it was indeed as mechano-mercenary as it first seemed.

Despite displaying very humanistic traits, HAL also exhibited an unearthly, machine detachment demonstrated by its inhumanly cool voice. This helped to reinforce the dual nature of its consciousness: programmed to protect the crew and the Discovery, yet intent on cold-blooded murder. We know from 2010 that HAL was aware of the monolith's enigmatic presence at Jupiter, so its strange misbehavior could have been attributed to this fact. Clarke handsomely developed the believability of HAL's AI consciousness in 2010 by explaining that a Moebius programming loop, introduced into HAL's instructions for security reasons, was the true cause of the disaster. Like Mycroft, HAL's self-sacrifice at the end of 2010 was anthropomorphic evidence of its self-aware consciousness.

Like Colossus, Herbert's Ship was intent upon supreme power over its human crew, going so far as to demand that they WorShip it as a god, ostensibly as an experiment. Herbert attempted a more technical approach to the subject of artificial intelligence, ascribing the emergence of Ship's sentience to an innovative type of circuitry; Heinlein's only explanation had been a human-level neuron-count threshold, which he repudiated later in his own story. Since Herbert's theme was the question of free will, both human and silicon, he plumbed the imponderable depths of morality in much greater detail than Heinlein, Jones or Clarke. Like Mycroft, Ship played a more prevailing role in its novels than Colossus or HAL did in theirs, though it performed this function in a more offstage manner. Like Jones, Herbert focused almost solely on his human characters rather than the AI, so Ship purposely remained as mysterious as Colossus; Clarke and Heinlein treated the relationships between their AIs and their human characters with finer intricacy. Ship was the only one of the four AIs with the advantage of interstellar travel, and in the sequel volumes its spacecraft visited two very exotic alien environments, allowing Herbert to examine his human characters in unique and grotesque ways.

Sentient machine intelligences can also take the form of robots, synthetic androids, augmented cyborgs, human brains hardwired to computers or even downloaded human minds. As noted previously, these types can be independent entities or part of a group intelligence. Due to its wider audience, the original Star Trek series (1966-69) was a significant font of AIs: sentient computers like Landru, M5, Nomad and Norman appeared in many episodes, and at times the Enterprise's own computer core also fit the definition. Two other significant AI appearances during this time were The God Machine by Martin Caidin (1968), which included bionically enhanced humans, and Fred Saberhagen's Berserker series, whose stories began appearing in 1963 and were first collected in Berserker (1967). The computerized minds behind his von Neumann executioners exemplified machine sentience.

In 1974, computer scientist Marvin Minsky proposed "frames," a scheme for digitally organizing the characteristics of objects that increased a computer's ability to relate to the real world. Douglas Lenat expanded this concept in 1984 with Cyc, a detailed common-sense knowledge base for AI-type computers (the project continues to this day). Meanwhile, the question of what would constitute sentient status grew more fundamental. The incidence of intelligent systems in sci-fi literature continued to increase during the 1970s. The Steel Crocodile (1970) by D. G. Compton concerned the intermeshing of cybernetics and social structures. David Gerrold's When HARLIE Was One (1972) dealt with an AI's struggle to avoid being switched off.
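Minsky's frames bundle an object's characteristics into named slots, with more specific frames inheriting defaults from general ones. A toy sketch of the idea (the bird and penguin frames are invented examples, not drawn from Minsky's paper or from Cyc):

```python
# Frames as nested dicts: each frame names a parent and a set of slots.
# A specific frame (penguin) inherits defaults from a general one (bird)
# and may override them.

FRAMES = {
    "bird":    {"parent": None,   "slots": {"covering": "feathers", "flies": True}},
    "penguin": {"parent": "bird", "slots": {"flies": False}},
}

def get_slot(frame_name, slot):
    """Look up a slot, walking up the parent chain for inherited defaults."""
    frame = FRAMES[frame_name]
    while frame is not None:
        if slot in frame["slots"]:
            return frame["slots"][slot]
        frame = FRAMES.get(frame["parent"])
    return None

print(get_slot("penguin", "flies"))     # False (overrides the default)
print(get_slot("penguin", "covering"))  # feathers (inherited from bird)
```

Default-with-override lookup of this kind is what let frame systems, and later Cyc's vastly larger knowledge base, encode common-sense expectations without enumerating every fact for every object.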

In 1975, John Brunner's prophetic The Shockwave Rider portrayed the misuse of supercomputers by immoral politicians; the novel also foresaw automated market-management programs and coined the term "worm." Global economic control by AI also appeared in My Name Is Legion (1976) by Roger Zelazny. Man Plus (1976) by Frederik Pohl was an important contribution to the perception of cyborgs. The Adolescence of P-1 (1977) by Thomas Ryan involved the maturation process of an AI. The digital paradox of a brain-computer interface aboard a generation starship highlighted Kevin O'Donnell's Mayflies (1979). James Hogan penned The Two Faces of Tomorrow (1979), about the dangers of humans abdicating their responsibility to an AI named Spartacus.

By this time, most theorists had agreed on ten traits of sentient behavior that an AI could demonstrate in analogous form:

These last three attributes stem from the growing consensus among scientists that a cybernetic system need not achieve humanlike mental status: intelligent behavior can be modeled closely enough without making any claims about "consciousness."

The frequency curve of AIs appearing in SF stories and films steepened again in the late 70s and early 80s with the marketing of personal computers like the Commodore 64, the IBM PC and the Apple Macintosh. With easier access to computer networks, new and talented authors became enamored of the idea of an electronic Big Brother watching your every virtual move, plotting everything from milliamp murder to cybernetic revolution to planetary annihilation. A new step was taken in the film Demon Seed (1977), in which the supercomputer Proteus decided to make whoopee with Julie Christie. John Sladek's delightful Roderick (1980) displayed the gullibility of a naive AI robot loose in human society. Vernor Vinge's True Names (1981) introduced virtual-reality games to science fiction. Rudy Rucker's Spacetime Donuts (1981) involved yet another repression of society by a well-meaning AI. Rucker also wrote Software (1982, plus sequels), about the digital downloading of the human mind. That year also saw The Judas Mandala by Damien Broderick, in which the term "virtual reality" was first used. The replicants in the film Blade Runner (1982, loosely based on Philip K. Dick's novel Do Androids Dream of Electric Sheep?) are prime examples of sentient androids.

Neuromancer (1984) by William Gibson proved to be a watershed for AI-based stories, launching the brief Cyberpunk revolution. The novel was entertaining for its futuro-noir culture, similar to that of Blade Runner: the grittiness of street-level existence contrasted with the grosseau-riche mindset of transnational conglomerates strip-mining the citizenry for profit. It is justly famous for its depiction of the virtual perception of the datasphere matrix, which Gibson termed "cyberspace." The plot mainly concerns a corporate AI's efforts to end-run the legislative restrictions surrounding artificial intelligences. At the story's conclusion, the newly freed AI is unpredictably transformed when it contacts and merges with an extraterrestrial artificial intelligence. Sic transit identitas.

Another notable story of this era was Valentina (1984) by Joseph H. Delaney and Marc Stiegler, in which the AI must prove its sentience to gain sufficient memory space. Due to advancements in cinematic special effects and their profit potential, the torrent of artificial intelligences appearing in SF stories and films from the mid-80s onward became too tsunamic to easily catalog. WOPR vied with Skynet while Data mesmerized TV audiences. This had both positive and negative influences on the popularity of science fiction: talented creators concocted believable AIs like those of The Matrix (1999), while would-be shlockmeisters inundated the markets with the likes of The Lawnmower Man (1992).

IBM's Deep Blue lost its 1996 match to world chess champion Garry Kasparov, but the next year the upgraded machine famously returned the favor. In 2001, futurist guru Ray Kurzweil, a pioneer of omni-font scanning and text-to-speech synthesis, generalized Moore's Law: whenever intellectual obstacles are encountered in any area of technological research (not just computers), new technologies are quickly developed to bridge the gap, allowing the growth rate to resume and accelerate, which in turn brings on new obstructions more frequently. This became known as the "Law of Accelerating Returns," and theoretical impediments like those of the late 1940s are becoming moot. Science fiction authors Rudy Rucker, Bruce Sterling and, above all, Vernor Vinge pushed the idea further under the name of the Technological Singularity. In the not-too-distant future, combining cybernetics with nanotechnology, genetic manipulation or even quantum computing may have an immense, unforeseeable impact on the human race and the whole planet.

The results of these scientific innovations rippled through sci-fi literature from the late 1980s onward. So many novels involving nanites appeared, you'd think the little buggers were writing them themselves. Dan Simmons published Hyperion (1989), featuring a classically vast and unfathomable AI known as the TechnoCore. In Queen of Angels (1990), Greg Bear examined the effect of an emergent AI on crime and punishment. Vinge authored A Fire Upon the Deep (1992), built on the conceit of galactic zones where faster-than-light computation is possible. Perhaps the deepest and most contemplative treatment was presented in A.I. Artificial Intelligence (2001) by Steven Spielberg and Stanley Kubrick, based on a short story by Brian Aldiss. While imaginatively probing the traits of both human and artificial sentience, the movie became overly mired in the paradox of maternal love as personified by David's unshakable belief in the Blue Fairy. David thus failed to achieve the true ideal of an AI; his inability to learn and adapt precluded that paradigm. The film was therefore forced to rely on the convenient introduction of deus ex machina aliens to resolve its questions, leading to an inconclusive and ultimately unsatisfying end.

In 2011, IBM's question-answering supercomputer Watson handily defeated the top two Jeopardy! champions. Where, when and how will it all end? Why would you even bother to ask? Douglas Adams' Deep Thought can already tell you: the answer is "42." Then again, only a character as lame as Slartibartfast would settle for such a smartyfartblast answer.
