Saturday, June 11, 2016

Artificial Intelligence - Thoughts in Philosophy

Recently I've been reading more and more from prominent scientists and engineers who warn against the dangers of artificial intelligence. It got me wondering...

What is intelligence anyways?

When I dissect this to its minimal form, I feel like intelligence is simply the level at which one is able to change the physical world into a form that satisfies a certain need. Or at least the capacity to conceive of a way to do so.

With this definition, we, as humans, definitely have a high level of intelligence, proven by how much we've managed to reshape our world to our needs. Even a paralyzed man, with no means to physically alter the world, can still, if his mind is intact, conceive of ways to do so; Stephen Hawking is a good example.

So it could be said that intelligence is the level at which one can conceive of ways to physically alter the physical constructs of his environment.

Well then, could a computer ever be intelligent?

There's a missing piece of the puzzle here. You see, being able to conceive of ways to change your surroundings implies that you also have motives to do so. In fact, there would be no point in having this capacity: without a need to be fulfilled, one would never exercise it.

You need an objective to decide what change must be made.

Without an objective, you'd be at best randomly conceiving ways to change things. There are things in our universe that seem to exhibit such properties: the wind, the planet's core, thunder, fire, etc. We tend to regard these as non-intelligent phenomena.

This means my definition is incomplete. Intelligence is the level at which one can conceive of ways to physically alter the physical constructs of his environment in the way he wants.

To want anything, one must have needs. Thus, you cannot be fully intelligent if you don't have needs. Similarly, you cannot be fully intelligent without the ability to conceive of ways to change the world.

Is that all?

Actually, no. I mentioned earlier that you need not have the capacity to alter the world, merely the ability to conceive of ways to do so, but I don't think that's totally correct. To truly qualify as intelligent, you'd need to be observably intelligent. Maybe plants have a hundredfold our capacity to conceive of ways to change the world, and do possess needs of their own, but alas, with their limited ability to apply those conceptions, one would be hard pressed to ever qualify them as intelligent.

Finally, I say: intelligence is the level at which one can observably alter the physical constructs of his environment so as to satisfy his needs.

Should we be worried about intelligent machines?

To create an intelligent machine, one would need to:

  1. Create a machine with needs.
  2. Create a machine that conceives of ways to fulfill needs.
  3. Create a machine that can physically apply preconceived ways to alter the environment.

With that perspective:

  • We should be very careful with #1.

The reason I say that is, if a machine had needs, chances are they would conflict with our own, and that's exactly when you get into a dangerous spot. Really, we should avoid this one completely; it doesn't even provide us with any benefit.

  • We should be very careful combining #2 and #3.

This one is more subtle. Ideally, we'd want to tell the machine "this is what I need" and have it go off, think of a way to meet that need, and execute on it all by itself. That's when you get into those sci-fi movie tropes: the machine that enslaves all humans because it figured out that was the best way to create world peace. So I'd say we'd want to keep these two separate, so that we can audit all conceived ideas first, before handing them to another machine to execute. A small sketch of that separation follows below.
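To make the idea concrete, here is a minimal Python sketch of what that separation might look like. All the names in it (Plan, Planner, Executor, human_audit) are made up for illustration, not a real system; the point is only that the component that conceives of plans (#2) and the component that applies them (#3) communicate solely through a human-reviewed gate.

    # Toy sketch: conception (#2) and execution (#3) never talk directly;
    # a human audit gate sits between them. All names are hypothetical.

    from dataclasses import dataclass
    from typing import List


    @dataclass
    class Plan:
        """A conceived way to alter the environment toward a given need."""
        need: str
        steps: List[str]
        approved: bool = False


    class Planner:
        """#2: conceives of ways to fulfill a need. It cannot act."""

        def propose(self, need: str) -> Plan:
            # Stand-in for whatever planning machinery you like.
            return Plan(need=need, steps=[f"do something that satisfies '{need}'"])


    def human_audit(plan: Plan) -> Plan:
        """The gate: a person reviews every conceived plan before execution."""
        print(f"Need: {plan.need}")
        for step in plan.steps:
            print(f"  - {step}")
        plan.approved = input("Approve this plan? [y/N] ").strip().lower() == "y"
        return plan


    class Executor:
        """#3: applies a plan to the world, but only an approved one."""

        def execute(self, plan: Plan) -> None:
            if not plan.approved:
                raise PermissionError("Refusing to execute an unaudited plan.")
            for step in plan.steps:
                print(f"executing: {step}")


    if __name__ == "__main__":
        plan = Planner().propose("world peace")
        plan = human_audit(plan)  # the human stays in the loop
        Executor().execute(plan)

Note that the Executor refuses any plan that hasn't been approved, so even a buggy or malicious pipeline can't route around the audit step; that's the whole value of keeping #2 and #3 in separate machines.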

Conclusion

In the end, I think the scientists and engineers who are warning us about the potential dangers of AI are mostly right. If we can, and ever do, create a machine with full observable intelligence, it would definitely have the potential to put us all at risk. Having said that, I think there's a great opportunity for improving our lives if we could build independent machines that each had partial intelligence, and thus it is probably worth keeping research going toward these goals.
