Some of the most fascinating stories in science fiction center around artificial intelligence, or AI. One of the most famous examples is Data from Star Trek: The Next Generation, who also fulfilled the Pinocchio trope of being an android who wanted to be more human.
Currently, an interesting AI is featured on CBS’ hit series Person of Interest. On it, “The Machine” -- actually an AI program lurking in cyberspace -- was originally programmed to ferret out terrorist activity before it happens. The premise of the show is that its creator, Harold Finch, realized The Machine was also finding the Social Security numbers of people the government deemed “irrelevant” -- people Finch decided needed to be helped anyway. So he enlists a former CIA officer to help out.
I’ve written about the show before, so I won’t reintroduce all the characters. What’s interesting is that Finch programmed The Machine too well. At the end of the second season, it was obviously working independently of Finch’s commands, and by the end of the third season it had apparently “grown” to the point where it is an unseen but increasingly realized character in its own right. In essence, it is free of Finch’s constraints and can continue growing on its own.
However, The Machine is confined to cyberspace. While that’s pretty wild, it’s not like Data, who inhabits an android body and looks almost human.
In between these two shows was the Battlestar Galactica reboot, which introduced a “breed” of Cylons that not only looked human and aspired to be human but had evolved -- it’s the only word I can come up with -- to the point where they seemed truly alive.
Consider just these three examples: Data learned to be more than merely a tool and chose to become one of the best Starfleet officers Star Trek has ever seen.
On the other end of the scale, most of the Cylons hated humans and tried to wipe them out, although there were more shades of gray than not.
Person of Interest’s producers have claimed that The Machine is benevolent because it has decided to stick to its mission of saving lives.
Let me add a fourth example and give you something more to chew on. The television version of Stephen King’s Under the Dome novel -- which departs from its literary cousin in several ways -- has just asked whether the titular dome covering the fictional town of Chester’s Mill, Maine, is alive. The character of Julia Shumway -- a journalist! -- begins to wonder when she realizes that while the dome has cut the townspeople off from the rest of the world and caused all kinds of problems, it has also protected them from a military strike and provided rain just when they needed it.
So, what is life?
Is this just super- (or supra-) programming, or are these machines and AIs alive? Science fiction (and, I believe, fantasy, too) allows us to ask such questions and gives us pause as we hurtle toward our own future.
io9.com, a website/blog I love to follow for both science fiction/fantasy and real science, recently featured a 9-minute video from PBS Digital Studios called The Rise of Artificial Intelligence. It features Watson, the computer that played on Jeopardy!; Siri, the voice recognition search program built into the recent iterations of the iPhone; and various other robots and AI programs.
Those interviewed in the video point out we still have a long way to go before creating Datas of our own. Vision is a problem, as are -- although getting much better -- natural language recognition and object manipulation. The video even features things we are beginning to take for granted: ATMs we can feed our checks into that recognize how much we’re depositing and from whom; Google Translate (which I’ve used on occasion), which, according to one scientist, does a better job than “machine translators”; and recommendation systems on everything from YouTube to Amazon, helping us figure out what we want next.
The goal, though, as Ernest Davis puts it in the video, is to “build an AI system that is equal to humans in all respects ... and, presumably, has consciousness in some sense.”
The problem, of course, is that such systems would have to mimic the human brain, which is far more complex than any computer or program to date. Although some of us have trouble doing so, most humans are able to discern what they need to know out of all the “noise” around us. Give a machine enough rules and it might come closer, one scientist said, but it’s almost impossible to write down all the rules humans follow -- naturally -- for a machine. Instead, the idea is to build an AI system that can teach itself. In other words, what do we have to give a machine to let it go off and learn on its own?
And there’s another path: the merging of man and machine. I’m not talking The Six Million Dollar Man; I’m talking Chuck Bartowski. Or how about taking a scan of someone’s entire brain and uploading it to a computer?
Either way, what would it all mean? Would we be at war with the machines, as in Terminator or Battlestar Galactica, or would we work beside them, à la Star Trek and the upcoming FOX series Almost Human?
If we ever get there -- and it’s not really so farfetched anymore -- there are a lot more questions to be answered. Will they be our slaves? Our “children”? Our equals?
The ethical questions are mind-boggling and need to be answered sooner rather than later.