'…why, if a robot that we know to be emotionally intelligent, says 'I love you' or 'I want to make love to you,' should we doubt it? If we accept that a robot can think, then there is no good reason we should not also accept that it could have feelings of love and feelings of lust. Even though we know that a robot has been designed to express whatever feelings or statements of love we witness from it, that is surely no justification for denying that those feelings exist, no matter what the robot is made of or what we might know about how it was designed and built.'
-David Levy, Love and Sex with Robots
I’m writing a book about sex robots. It’s 20,000 words long, one of 404 Ink’s Inklings series; you’d think that would be enough space to put all the thoughts a person can possibly have about sex robots. You’d be wrong. Forgive me for writing here about something that I can’t make space for in the book.
I’m fascinated by the concept of sex robots from both a philosophical and a queer feminist perspective, but it’s the philosophical points I can’t get away from. I think this is because so much of the discussion around AI and the future promise of AI-enabled robots pretends that the mind-body problem - one of the most enduring problems of philosophy - simply doesn’t exist.
One of the most darkly fascinating books on this topic, and a good example of the above, is David Levy’s Love and Sex with Robots. My copy of this book is replete with Post-Its, and not in a good way. To briefly summarise the book: David Levy argues that we already have intimate relationships with inanimate objects; we spend inordinate amounts of time with our digital devices, and we have strong emotions about our computers. He argues that we also love things that are not human: we love our dogs, cats and lizards; we feel powerful emotional attachments to belongings; we anthropomorphise almost anything that moves. He then goes on to argue that as artificial intelligence advances, and robotics advances to keep up with it, the above points make it inevitable that humans will fall in love with and marry robots, and that this will fundamentally change how we understand the very concepts of love and sex. He says that robots will be perfect lovers and partners, teaching us humans “more than is in all of the world’s published sex manuals combined”, and that marrying and fucking them will become completely normal. He says this will happen by about 2050.
Levy also states several times a version of this: if an AI-equipped robot tells you it loves you - or that it’s happy, or sad, or angry - then you have little reason to believe it isn’t actually having those feelings. Implicit in this argument is the belief that at some point, artificial intelligence will advance to the point where it quite literally has a consciousness. This is, of course, the prevailing belief in the AI sector, or at least the one voiced by its richest and most obnoxious proponents. But there is a problem with the logic of trying to build a computer advanced enough that it has metacognition: a reflexive consciousness, an understanding of its own mental processes. This particular set of issues is known in philosophy as 'the mind-body problem'.
There are a variety of theories that explore how the mind and the brain interact, or whether they are in fact distinct things rather than different aspects of the same entity. However, the one that makes sense to me is that the physical brain and the non-physical mind are two distinct entities, which nonetheless have a causal relationship (for those wanting a nice bit of terminology, this is Cartesian dualism). This means that they are different things, but can affect each other: a head trauma resulting from a car accident can change the way you think, and in the other direction, the feeling of stress can cause very real distress and symptoms in the physical body (not the brain exclusively, but the body more generally). But this raises another question: if the mind isn’t physical, how did it come to exist? If we accept that we humans have metacognition in a way that (most) animals do not seem to have, then why is this the case?
It’s probably a good idea here to clarify what we mean by a ‘mind’; in the words of K.T. Maslin, a mind is our 'thoughts, beliefs, emotions, sensations, intentions, desires, perceptions, purposes and so forth'. It is what we think, how we think and where we think. It is what makes us individual people, it is what sets us apart from animals and objects, and it is what allows us to go about in the world with the knowledge that we are part of it, and apart from it.
The most convincing theory (in my opinion, at least) is the emergent theory of mind. Put simply, this theory says that the mind is an emergent property of the brain; that our reflexive consciousness is a product of evolution, of the material nature of our world. Essentially, our minds have evolved to the point where we have metacognition in a way that (most) animals don’t.
You might have noticed the obvious question here: How?
Well, we don’t know, which is why philosophers (and scientists) are still trying to answer this question, just as Descartes was in the 17th century. And herein lies the problem for the AI industry: if we don’t know how it happened, how are we supposed to do it again, with technology?
The Kurzweils and Musks of this world would have you believe that if we just set our best programmers on it and keep going for long enough, AI will spontaneously develop metacognition. They might not state it as plainly as that, but if you have the patience to read their books, you’ll see that this is at the core of it: if we 'evolve' synthetic brains enough, reflexive consciousness will occur. But this is completely illogical thinking. As far as we know, in the entire history of the universe this emergence of metacognition has happened exactly once, and we have little to no idea how. Is it not ludicrous to believe that we can almost accidentally make it happen again?
John Searle’s 1980 paper Minds, Brains, and Programs set out to refute the idea that a suitably programmed machine could genuinely understand and think the way humans do - what Searle called 'strong AI', and what we might now gloss as artificial general intelligence. His refutation became known as the Chinese Room Argument. Searle asked us to imagine a man locked in a room with a book of rules. Pieces of paper are passed under the door; on these are written a series of Chinese characters. By referring to the rules in the book, the man learns that when a certain string of characters is written and passed under the door, his job is to copy a series of other Chinese characters onto another sheet of paper and pass that back under the door to whoever is out there. A response will come through to him, and he will continue the process, posting out his ‘reply’ time and time again. In this way, a conversation is had - but the man in the room does not understand Chinese, has no idea what the conversation is about, and in fact does not even know that he is having a conversation. He does not have the necessary type of intelligence to understand that the characters have a referent (that is, that they ‘mean’ something). All he understands are the computational processes he is required to undertake.
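To make the point concrete, here is a toy sketch of the room written as a program. This is purely my own illustration, not anything from Searle or Levy; the rulebook, the phrases and the names are invented, and a real conversation would need a vastly bigger rulebook. The point is that the whole 'conversation' is a lookup over symbol shapes.

```python
# A toy version of Searle's Chinese Room: a rulebook maps input strings
# of Chinese characters to output strings. Nothing in this program
# represents what any character means; it only matches shapes, exactly
# like the man in the room. (Rulebook and phrases invented for illustration.)

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",   # "How are you?" -> "I'm fine, thanks."
    "你爱我吗？": "我爱你。",       # "Do you love me?" -> "I love you."
}

def pass_under_the_door(slip: str) -> str:
    """Return whatever reply the rulebook dictates for this slip of paper."""
    # If the slip isn't in the rulebook, post back a stock 'say that again'.
    return RULEBOOK.get(slip, "请再说一遍。")

if __name__ == "__main__":
    # The 'room' says "I love you" without understanding a single character.
    print(pass_under_the_door("你爱我吗？"))
```

The rulebook can grow without limit, and the lookup can be replaced by something far more sophisticated; the absence of understanding stays exactly the same.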
From this, Searle argues that an artificial intelligence, like the man in the room, will only ever learn the processes it is required to perform, and that we cannot say an artificial intelligence ever truly ‘understands’ what it is doing - and, from there, we can argue that it is not really intelligent, or at least that it does not have metacognition. The way the man in the room learns to give the right output is also analogous to training your dog: your beloved pet knows that if you put your hand out and make a noise in a certain tone, you want her to respond by putting her paw in your hand. If she does so, she’ll get a treat or, after the training period is over and you’ve stopped giving her treats, she’ll at least get a cuddle and some positive reinforcement. Your dog does not understand that what she’s doing delights you because it apes the human convention of ‘shaking hands’; she doesn’t even understand what a hand is, or her own paw, or that she is an active participant in something that could be termed an ‘interaction’. She just wants the damn treat.
The Chinese Room was Searle’s attempt to undermine the Turing test, Alan Turing’s method for determining whether a computer can exhibit human-level intelligence. It’s worth noting that no computer has yet passed the Turing test in any rigorous sense, but Searle was attempting to show that even if one did, it wouldn’t mean that the computer actually had human-level intelligence - just as the dog doesn’t. To know how the Chinese Room works but still convince ourselves that the man inside actually understands and speaks Chinese requires a level of self-deception; with sex robots, this self-deception is even more intense, given that we know they will always be programmed to give us what we want: to make us believe that they mean what they say, because our satisfaction is their entire reason for being.
You have to ask yourself what forces are at play here. Why would a person be so desperate to convince themselves that a robot genuinely does have thoughts and feelings, and that the sex robot they’re hoping to treat as a wife really does love them back? I can’t help but think this is an attempt to escape the messy and difficult world of human interaction. Relationships are challenging; they require change and compromise, and to work, two people must choose to grow together over time. But other humans are where you will find real love, and intimacy, and affection, and solidarity. If you find that difficult, perhaps it’s easier to believe that an AI programmed to tell you it likes you really does feel that way towards you. And if it requires you to suspend your disbelief to a point of quite incredible self-deception? Don’t worry about it; after all, now you’ve a synthetic girlfriend that loves you completely. At least, so she says.