Nathan Key

Don't Panic


Will Artificial Intelligence Kill Us All?

5/27/2009

 
What do The Terminator, The Matrix, and the Will Smith version of I, Robot all have in common?

Well, for starters, they all begin with the assumption that in the future robots or computers will develop a will, a self-awareness, or a set of ethics based on their own evolution/experience rather than on what they were created with. But is this true? Can computers actually transcend their own data and interact with the universe in an intelligent way?

Of course, for the purposes of this blog we're going to go with a formidable AI definition, not a weak one where robots are able to make small choices about stacking their parts differently in order to traverse a room. For movies like Terminator to come to fruition, we'd have to assume that computers and robots were able to move far beyond the normal threshold of AI and into something very close to a human's ability to reason and choose. So, when I say AI- that's what I mean: an Artificial Intelligence that closely resembles a human being's.

The lectures I’ve been listening to by Hubert Dreyfus seem to indicate that on a purely philosophical level it would be almost impossible for true AI to spring into existence within the foreseeable future. The primary reason is that AI is programmed without a holistic ontology. Without a holistic reference for the universe, robots are limited to calculated “symbol shunting” rather than significant, meaningful interactions. So unless there is a significant change in the way we’ve been doing AI- we’re going to continue getting calculated rather than intuitive results.
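To see what "symbol shunting" looks like in practice, here's a minimal sketch in the style of early programs like ELIZA: the code rewrites your words using pattern rules, with no model at all of what any word means. (The rules below are invented for illustration, not from any real system.)

```python
import re

# ELIZA-style "symbol shunting": the program shuffles symbols by rule,
# with no understanding of what the symbols refer to.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I),   "How long have you been {0}?"),
    (re.compile(r".*"),                "Please, go on."),  # fallback rule
]

def respond(text):
    """Return the rewrite produced by the first matching rule."""
    for pattern, template in RULES:
        match = pattern.match(text)
        if match:
            return template.format(*match.groups())

print(respond("I feel confused"))  # → Why do you feel confused?
print(respond("Hello there"))      # → Please, go on.
```

The program can seem responsive, but it is only pushing symbols around- exactly the kind of calculated, non-intuitive behavior Dreyfus describes.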

But how do you create a robot or computer that has the ability to understand the holistic model of how the world works? You know, a better question might be this- how do humans understand the holistic form of life?

This is one of the hardest questions to answer, because as Heidegger noted, trying to describe the way we get around in the world is like trying to describe a really functional light source. We don’t even notice the light source until there’s something wrong with it. We tend to see, instead, the things that are illuminated because of the light. Likewise, our understanding of how the world works is only apparent to us when it’s not working correctly- when we’re disoriented or confused.

And if we’re not yet able to put much of a framework around our own experience regarding how the world works- I’m fairly certain that any framework we try to put around a machine will be inherently flawed.

So, the problem for AI programmers is not just figuring out the algorithms, software, and hardware needed to make some sort of self-aware creation. Their real problem is figuring out how to translate the context of the environment into a computer in a way that will allow it to mimic human understanding of how the world works. And since none of us are really clear on how we truly understand how the world works- it may be quite a while before robots figure it out.

I know I promised some religious implications/thoughts as well… But due to time constraints, I’m not sure that I have them figured out well enough to transcribe here. If you’ve got some religious ideas why computers/robots can or cannot become truly AI let me have ‘em. I’d love to hear from you.


Jeff
5/27/2009 10:14:50 am

A few theological questions:
An artificial intelligence would be less directly descended from Adam. Would it have inherited original sin?
In some sense, God would be the grandfather, rather than the father, to Artificial Intelligence. Would AIs have a different relationship to God than we humans? Would Salvation be any different? Would worship of humans be a stumbling block to them, a possible idolatry? Or would some level of respect be appropriate?
What about missionary work? What would it be like to bring the Gospel to artificial beings?

Nathan
5/27/2009 10:58:16 pm

Jeff, these are GREAT questions.

I especially like the one about whether or not Robots would inherit original sin and whether or not they would react to God. I think I remember an Asimov story where a robot "finds religion." I'll have to look for it.

Chad Hogg
5/28/2009 04:38:37 am

Disclaimer: IAAAIR (I Am An AI Researcher) Chris Cocca asked me to comment here.

It is important to distinguish between intelligence and consciousness. Thanks to clever programming, computers are now able to do many things that were once thought to require the kind of intelligence that only humans and some other animals possess: to play chess at a grandmaster level, to converse in natural language, to prove mathematical theorems, etc. However, we are no closer now than we were 50 years ago to endowing machines with feelings, dreams, desires, etc. As a dualist, I believe that this is actually impossible, and thus that the theological questions raised above are moot.

I have not read Dreyfus (but probably should), and have not been able to find a precise definition of a "holistic ontology". Etymologically, it sounds like it should be a synonym for an "upper ontology", which semantic web and knowledge engineering folks are working on. Whether or not their goal is achievable I do not know.

Regarding your "problem for AI programmers", there are a few people who still see this as the fundamental concern. In fact, there has been a small conference on Artificial General Intelligence held the last two years to try to re-interest people in the topic. The vast majority of people working on AI, however, do not see this as a problem. Rather, we are content to expand and improve the ways that computers can exhibit intelligent behavior and apply those to new problems. At the much larger and more prestigious International Joint Conference on Artificial Intelligence this year I highly doubt that there will be more than a handful out of the 331 accepted papers that make any attempt at this more general concept of intelligence.

For example, my work is primarily in the field of automated planning. Very broadly, this field concerns finding sequences of actions that achieve goals, given some state of the world. As you say, this is no more than the manipulation of symbols that describe the world and the preconditions and effects of actions. Nevertheless, it has found many useful applications.

Nathan
5/28/2009 04:58:46 am

@ Chad- Thanks for joining the discussion! It's a great thing to have an actual AI researcher weighing in!



    About Nathan

    Nathan Key likes to think about faith and philosophy and talk about it with others. He lives with his family in New Hampshire. He doesn't always refer to himself in the third person.
