From the moment the first trailer for Tom Hooper’s Cats appeared online, discussions began around the sinister psychological divide between illusion and reality. Much was made of the film’s ‘digital fur’ technology, but the end result was ‘human-cat hybrids romping around with [actors’] faces, and it somehow manages to feel both a little too real and a little too off.’ We feel revulsion when confronted with this fine line between human and machine – a response Japanese robotics professor Masahiro Mori named the ‘uncanny valley’ in 1970.
In his original paper, Mori argued we should steer clear of making robots that imitate humans too closely. He described how we feel increased empathy for manmade objects such as robots as they look more and more human, but only up to a certain point. When the likeness becomes too close, the imperfections become disturbing; our empathy falls away and we feel a sharp revulsion. That’s the uncanny valley.
In a 2011 interview with WIRED, Mori said he believes it’s possible to bridge the uncanny valley, but it’s better not to try. This is the dilemma facing designers working on video games and films such as Cats – should you aim to leap over the uncanny valley and create a perfect imitation when the slightest slip-up can plunge you into the monstrous abyss? And is it even ethical or desirable to create a perfect replicant?
As a computer science graduate and someone who’s worked in a user experience role for a start-up that made apps connecting people, I’m a devotee of digital aids, computer games and podcasts, constantly craving time online to wander amongst the infinite.
But at the same time, as a person who has experienced the gaslighting, manipulation and boundary-stomping of emotional abuse, I’m unsettled by grotesque machine-learning attempts to ape our feelings and parrot them back or manoeuvre around them.
In a sense, we’re all manipulating our way through the world. Writing to persuade is a practice in exerting subtle, if (I tell myself) usually benevolent, influence over others. Many of us meticulously curate our social media personas to fit the mould of who we want to be. We’re all taking the truth and bending the light, and worse, setting up online lemonade stands and selling ourselves by the cupful.
But when technology’s pulling the strings, there’s something more sinister at work. The first time I encountered a personalised ad – one that had memorised what I window-shopped for online and regurgitated the item on a completely unrelated website – I recoiled. I felt like my brain had somehow been invaded. Smartphones and TVs eavesdrop on our conversations and report our preferences back to base camp, unless we’ve adjusted our privacy settings to prevent it. These days, everything is personalised and customised, following us round the internet and taunting us with our own ideas, knowing what we want before we know it ourselves.
When we’re pitted against a machine mind, the playing field is no longer anywhere approaching level. We don’t like to be outsmarted by bits and bytes that have come back to bite us. We’re fighting calculations per second from an entity that can’t make mistakes (provided it’s effectively programmed, of course). It’s precision and perfection and hard angles striving to mimic messy, shapeless, hapless humanity. A grotesque, pulsating facsimile executed flawlessly and at lightning speed. Machine-made perfection is, ironically, often jarringly imperfect when it tries to pass itself off as one of us. The deception, the devil in digital disguise, is particularly loathsome.
Our empathy is up for sale, even when we’re not being explicitly sold something. Gmail suggests replies, or predicts the next few words of your sentence – how useful! It feels like the algorithm has my back, shouldering emotional labour and making fraught email exchanges a tad less daunting. It’s also entirely optional – I can choose how often or seldom I take up its recommended phrases. By contrast, when the computer-aided anticipation of our thoughts and needs is taken too far or clumsily implemented, it’s like having your mind read by Hannibal Lecter’s long-lost cousin. Take Google Duplex for instance, a trial service where a human-sounding bot phones to make restaurant reservations for you, complete with ums and ahs. It has drawn criticism for blurring the line between interacting with a human and a machine. Even for something as simple as booking a table, control and consent make all the difference.
Maybe then, it’s not that we’re afraid of machines per se – it’s the deception, the creeping feeling that we’re having our minds read with evil intent, whether the perpetrator is man or machine. Chatbots have evolved well past the clumsy, hardwired responses of ELIZA, the 1960s program that emulated a conversation with a psychotherapist. Now, they learn on the fly as they converse with humans, mutating closer to our speech patterns by the minute to convey empathy in the customer service experience. But like impressionable children, they can be steered down a troubling path if the people they learn from set a bad example. Scientists at MIT managed to create Norman, an AI psychopath, by feeding it a particularly dark and disturbing Reddit thread.
Human narcissists and manipulators evoke the uncanny valley reaction in us as well, mimicking empathy rather than feeling it, and performing supposedly emotive actions and reactions that are almost-but-not-quite real. We pick up on the flash of narrowed eyes a fraction of a second before the laugh or smile, and subconsciously register we’re being tricked.
Those who encounter one of these personalities often can’t put their finger on what doesn’t seem right about an interaction. They just feel uncomfortable with the lack of genuine connection and the seeming x-ray vision of the person trying to read their mind and show them what they want to see. On the odd occasion the mask does slip ever so slightly, the underlying anger, callousness and untrustworthiness shine through.
This link between the uncanny valley and psychopathy was the subject of a study by researchers at the University of Bolton in the UK, who found that the ‘dead-eyed’ expression of many computer-generated characters – a cue we associate with psychopathic behaviour – evokes an uncanny valley reaction in the people perceiving it. The uncanny valley reaction is an early warning system telling us to run when in the presence of someone (or something) who means us harm.
If the uncanny valley is a warning of imminent danger, the other unsettling aspect of it is the uncertainty. Because we’re only catching flickers of unsettling expressions and behaviours, we’re not really sure what we’re dealing with and whether the threat is real or imagined. It’s the point where we second-guess ourselves, so we have to remain on high alert rather than having our suspicions confirmed. Our brains do not deal well with ambiguity.
The computer-made figures of my childhood seemed much friendlier and more easily classified. Back in the ‘80s and ‘90s, heavily pixelated human-like forms on screen and boxy in-real-life robots were cute rather than worrisome. There was a march towards realism, in fiction and fact, but we were impressed rather than repulsed.
Today, however, the transformation is almost complete. We get a creeping feeling when watching videos of humanoid robots performing parkour and terrifyingly advanced mechanical canines climbing stairs. They were much more palatable when they mashed themselves into walls in a never-ending loop.
In a time when robots were so knuckleheaded in real life, it was much easier to suspend my disbelief when it came to their fictional representations. Now, it’s another matter. I’ve seen enough Terminator movies to know what happens when androids can fool us into thinking they’re flesh and bone through too-accurate looks and behaviour.
Sure, we’re disturbed by the uncanny valley, but technology advances towards it regardless. The processes for altering images have grown so sophisticated, we now have to contend with the threat of deepfakes – neural networks replacing or manipulating a person in an image or video to create a convincing fraud. The future the sci-fi movies warned us about quietly crept in the back door while we distracted ourselves with Snapchat bunny filters.
In the end, the backlash from viewers to the Cats trailer saw much of the film’s CGI changed before (and indeed, after) the film’s release. Whether it was successful in avoiding the uncanny valley is up for debate, but it’s somewhat comforting to know human feedback is still being heard.
Where we go from here is anyone’s guess. Carrie Fisher was posthumously inserted into the final Star Wars film; the recently announced next step is the recreation of long-gone film star James Dean in the Vietnam War movie Finding Jack. Concerts performed by holograms are marching into the realm of possibility as well, another opportunity to resurrect stars some of us never had the chance to see. Death is no longer an obstacle; we’ve begun to wonder whether the human original is really needed at all. First we’re tricked and then we’re superseded.
The uncanny valley makes us mistrust reality and question perceptions we thought we already had nailed down. For every near-miss there is a potential miss; if it takes a double take for me to realise I’m being tricked, what else have I got wrong? Pull that thread long enough and sanity comes tumbling down.
What we need to ask ourselves is: after machines eventually do traverse the valley, what then? When they become capable of creating perfect copies of us, our minds might not rebel, but they damn well should – particularly with malignant personalities behind the wheel.