On robots and love (not as boring as it may sound)

In my tutor session today a fascinating subject was broached, one I have been enamoured with for a long time: the future of Artificial Intelligence.

As I found out in a lecture last week, the term AI was coined by a computer scientist named John McCarthy; for homework I had to read a story called ‘The Robot and the Baby’ by the very same man.

This story speculates on the future of AI and robots in general. To what extent will we use robots in the future? Can robots indeed be trusted? In the story the titular robot is made to appear monstrous to humans so that children will not become attached to it; however, with the aid of a few garments it is able to fool the baby with a simulation of love. This raises the question: what is love? If a metal machine’s simulation of love is able to convince a human, is love purely material? If one thinks about it, people throw the word around all the time, and not just regarding people: “I love my iPod!” is one I have heard a lot. Humans are infinitely capable of ‘loving’ inanimate objects, so why not animate ones such as advanced robots?

This thread takes me to an Isaac Asimov short story I read a few years ago entitled ‘A Boy’s Best Friend’. In the story a boy is playing with his mechanical dog ‘Robutt’, and he appears to love it and it appears to love him back: they run together, Robutt jumps up at him and squeaks with excitement. Then the boy’s dad buys him a new dog, a ‘real’ dog. The father’s argument is that the dog is alive and has feelings, whereas Robutt is just wires and programming; the dog will actually love the boy and not just pretend to. What I found interesting was the boy’s reaction: “But what’s the difference how they act, how about how I feel? I love Robutt and that’s what counts.”

Along with love, there is also the issue of trust. When intelligent robots are integrated into society, as in Asimov’s work, will we as a society be able to trust them? I personally believe that society as a whole will find it incredibly difficult to live alongside such machines. Sure, there may be a certain number of open-minded people who embrace them from the start, but I believe it will take many years, even after AI is advanced enough for robots to function entirely independently, for human society to embrace them. Which is somewhat unfair if one really thinks about it, as machines are even more trustworthy than humans. A machine is not capable of abstract thought; it cannot become psychotic or operate on base desires such as greed or envy. Machines do exactly what their masters tell them to do. With a human servant, for example, there is always a chance, however minute, that he or she will steal out of greed. A robot servant, on the other hand, sees no value in material goods and has no such emotion, or any emotions for that matter. With humans there are no guarantees, there is no black or white; with robots that is all there is.

This of course can also be a bad thing: robots do exactly what one tells them, and one can see how that may become a problem. They have no judgement, and if a situation changes rapidly they have no way of adjusting. Isaac Asimov’s robot stories are full of cases where robots carry out their orders exactly but end up breaking human laws and generally causing disruption. His well-known Three Laws of Robotics are set down in order to protect mankind and to ensure servitude; however, in his stories it is often found that they are by no means foolproof. There are many loopholes and ambiguities, and, as robots become more sophisticated, they begin to ‘interpret’ the laws rather than blindly follow them.

This leads me to my conclusion (that and the lecture I have in ten minutes) and to mankind’s greatest fear: that machines become so advanced and so sophisticated that they in fact become self-aware. Computers at the moment are stupid, but they can be both stronger and infinitely faster than human beings. If it should come to the point that, like ‘Skynet’ and countless other iterations in the media, machines become sentient, they would have absolutely no problem dominating humanity. The question that comes to my mind is: is it worth it? Is it worth the risk of developing AI? Should we stop at a certain point, before the masters become the servants (or, worse, extinct)? Then again, when have we, as a race, ever been able to stop?

Adantur out.
