What do The Terminator, The Matrix, and the Will Smith version of I, Robot all have in common?
Well, for starters, they all begin with the assumption that in the future robots or computers will develop a will, a self-awareness, or a set of ethics based on their own evolution and experience rather than on what they were created with. But is this plausible? Can computers actually transcend their own data and interact with the universe in an intelligent way?
Of course, for the purposes of this blog we're going with a strong definition of AI, not a weak one in which robots can merely make small choices, like stacking their parts differently in order to traverse a room. For movies like Terminator to come to fruition, we'd have to assume that computers and robots could move far beyond the normal threshold of AI and into something very close to a human's ability to reason and choose. So when I say AI, that's what I mean: an artificial intelligence that closely resembles a human being's.
The lectures I’ve been listening to by Hubert Dreyfus seem to indicate that, on a purely philosophical level, it would be almost impossible for true AI to spring into existence within the foreseeable future. The primary reason is that AI is programmed without a holistic ontology. Without a holistic reference for the universe, robots are limited to calculated “symbol shunting” rather than significant, meaningful interactions. So unless there is a fundamental change in the way we’ve been doing AI, we’re going to keep getting calculated rather than intuitive results.
But how do you create a robot or computer that can understand a holistic model of how the world works? Actually, a better question might be this: how do humans understand the holistic form of life?
This is one of the hardest questions to answer because, as Heidegger noted, trying to describe the way we get around in the world is like trying to describe a perfectly functional light source. We don’t even notice the light source until something goes wrong with it; we see, instead, the things it illuminates. Likewise, our understanding of how the world works only becomes apparent to us when it isn’t working correctly, when we’re disoriented or confused.
And if we’re not yet able to put much of a framework around our own experience of how the world works, I’m fairly certain that any framework we try to put around a machine will be inherently flawed.
So the problem for AI programmers is not just figuring out the algorithms, software, and hardware needed to make some sort of self-aware creation. Their real problem is figuring out how to translate the context of the environment into a computer in a way that allows it to mimic a human understanding of how the world works. And since none of us is really clear on how we ourselves understand the world, it may be quite a while before robots figure it out.
I know I promised some religious implications and thoughts as well, but due to time constraints I’m not sure I have them figured out well enough to transcribe here. If you’ve got religious ideas about why computers and robots can or cannot become truly intelligent, let me have ’em. I’d love to hear from you.
Nathan Key likes to think about faith and philosophy and talk about it with others. He lives with his family in New Hampshire. He doesn't always refer to himself in the third person.