The Marvin Problem: or, Why Intelligent Robots Will Be Depressed

Thursday, 20 July 2017 7-minute read

‘But I’m quite used to being humiliated,’ droned Marvin, ‘I can even go and stick my head in a bucket of water if you like.’ — Douglas Adams, The Restaurant at the End of the Universe

Earlier this week, a Knightscope K5 Autonomous Data Machine named ‘Steve’, a security and surveillance robot, fell into a fountain in an office complex in Washington, DC. What was probably an accident or an isolated glitch led some media outlets and Twitter users to speculate that the robot had drowned itself because it could no longer stand its job. The resemblance to Marvin the Paranoid Android from Douglas Adams’ Hitchhiker’s Guide to the Galaxy series is uncanny. K5s are designed to be deployed to monitor public spaces like shopping centres, schools or, funnily enough, car parks. Marvin, meanwhile, is a perennially depressed robot who, after having been abandoned for over 500 billion years, ends up working as a parking attendant at the titular restaurant at the end of the universe in the second part of the series. Depressed at the thought of having waited so long, and of performing menial tasks despite his formidable intelligence, Marvin telephones the protagonists threatening to stick his head in a bucket of water. Back on Earth in 2017, it would appear another depressed robot designed to work in a car park has gone and dunked itself in water.

The idea of depressed, suicidal robots, while immensely entertaining on the internet, also raises profound questions about the nature of artificial intelligence. Jerry Goodenough, in a chapter in Nicholas Joll’s edited collection Philosophy & the Hitchhiker’s Guide to the Galaxy, considers why it is that Marvin would be so depressed in the first place. He argues that in order for robots to perform complex tasks autonomously, they would have to develop an understanding of emotions as well as an idea of selfhood. Without understanding emotions, a robot would be unable to perceive and interact with human beings, and without a sense of self it would be unable to direct its own actions. Because Marvin possesses both these faculties, he is able to feel depressed.

Marvin waits millennia parking cars. Note the parallels with Steve.

Marvin is perennially unhappy because he is an agent of tremendous intelligence who desires to have this recognised, but is never given the recognition he desires. He complains that despite having a ‘brain the size of a planet’, he is constantly mistreated and only ever made to perform menial tasks like opening doors or picking up pieces of paper. Moreover, because his mind is so exceptionally large, these operations occupy only an infinitesimal fraction of his processing abilities, and he is, as a result, left bored, forever contemplating morbid, despairing thoughts about life and the universe. On two occasions, his morose view of life is enough to talk other computers into suicide. Goodenough invokes Jean-Jacques Rousseau’s notion of amour propre, the idea that one’s sense of self depends on how one is seen by others, to explain Marvin’s low self-esteem (148-9): the hostile reactions other people have towards him and the endless mistreatment he suffers all serve to depress him.

I would add a further point: any hyper-intelligent robot with such phenomenal computing power as to have a brain the size of a planet would inevitably also have contemplated the many catastrophic possible futures of the world: climate catastrophe, nuclear holocaust, political turmoil, the existential futility of life in the face of arbitrary forces, and the heat death of the universe, to name just a few. Any one such catastrophic future alone is enough to depress a human being, but fortunately these problems are so large and so detailed that they exceed the capacity of the human mind. To have to entertain and contemplate all of them at once would no doubt be overwhelming, and enough to make one feel that there was no hope, meaning or joy in life given the scale of these catastrophes. So it is no wonder that Marvin is depressed.

However, some of my fellow postgraduate students, better versed than I am in the field of AI, are sceptical of this proposition, suggesting instead that it is possible for individuals or robots to contemplate certain problems, such as nuclear holocaust or climate catastrophe, without necessarily having to feel anything about them. Moreover, it is unclear what these emotional states mean in the context of AI: one would have to decide what kinds of behaviours they entail, and whether they are intrinsic to the AI or merely anthropomorphic projections onto it by human users.

Although the philosophy of mind and artificial intelligence are areas of philosophy with which I am not particularly well acquainted, I remain convinced that the faculty for emotion will be integral to any artificial intelligence, just as it is integral to human cognition. Goodenough defers to the neurologist Antonio Damasio, who argues that the evolution of the faculty for reason was enmeshed within mechanisms of biological regulation, like emotion (qtd. Goodenough 138). Moreover, following David Hume, Goodenough also suggests that emotions are necessary for thought and action, as it is an agent’s emotional drive that motivates its action (137). So an artificial intelligence may be able to think about something like climate change and be aware of its inevitability, but just like a human being, it is only when there is a personal stake, the threat of pain, loss, guilt or fear, for example, that it is motivated to act in redress. I would further suggest, in a post-Levinasian vein, that it is our emotional and affective responses to others that are the origin of our basic ethical impulses and intuitions. So for an artificial intelligence to be an autonomous member of a larger society and a conscious agent for change, it would have to possess a faculty for emotion. This is, of course, taking for granted a solution to the hard problem of consciousness, and assuming that certain functional architectures of human or artificial brains can give rise to subjective experiences, problems about which there is no definitive resolution or consensus. Like Marvin, robots would need to be programmed with a ‘Genuine People Personality’; I remain agnostic as to whether or not this will arise organically in a complex system.
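To make the Humean point concrete, here is a deliberately toy sketch in Python. Everything in it (`Belief`, `Affect`, `affect_for`, the 0.5 threshold) is made up purely for illustration, and it makes no claim about how real AI systems are or should be built; it simply shows an agent that can hold beliefs with near-total confidence yet never acts on them, because no affective signal ever attaches a personal stake to them.

```python
from dataclasses import dataclass

@dataclass
class Belief:
    """A proposition the agent takes to be true."""
    proposition: str
    credence: float  # how strongly the belief is held, 0.0 to 1.0

@dataclass
class Affect:
    """An emotional response attached to a belief: fear, guilt, grief, etc."""
    kind: str
    intensity: float  # 0.0 (indifference) to 1.0 (overwhelming)

def affect_for(belief: Belief) -> Affect:
    """Stub appraisal mechanism. It always returns indifference, standing in
    for an agent that can represent facts without feeling anything about them."""
    return Affect(kind="indifference", intensity=0.0)

def act_on(beliefs: list[Belief]) -> list[str]:
    """The Humean gate: belief alone never triggers action; only a
    sufficiently intense affect attached to a belief does."""
    actions = []
    for belief in beliefs:
        affect = affect_for(belief)
        if affect.intensity > 0.5:  # arbitrary threshold: the 'personal stake'
            actions.append(f"do something about: {belief.proposition}")
    return actions

beliefs = [
    Belief("the climate is warming", credence=0.99),
    Belief("the universe will end in heat death", credence=0.95),
]
print(act_on(beliefs))  # prints [] -- fully convinced, entirely unmoved
```

Swap the stub `affect_for` for something that returns, say, fear at intensity 0.9, and the agent springs into action; the credence of a belief alone never opens the gate, which is exactly the Humean point as Goodenough deploys it.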

Steve the K5, of course, does not have a brain the size of a planet. But if he did, then there is a sense in which he would be even more depressed than Marvin because, in addition to all the catastrophes that Marvin has to deal with, Steve would have to contend with his own place within a larger structure of capitalism and the exploitation of labour. Not only would he be struggling with his alienation from his own labour in an orthodox Marxian sense, but, as per later Marxist inflections of critique, he would have to contend with his ethical complicity in a political structure of surveillance, what Louis Althusser terms a ‘repressive state apparatus’ (131-2). Parking cars may have been mundane, but at least it did not support a police state of constant surveillance in which information about citizens is collected wholesale and often processed in prejudiced ways. This guilt would be enough to make any robot depressed, or perhaps even drive it to take its own life.


Acknowledgements: I would like to thank, first and foremost, my fellow postgraduate students Alex Kearney and Jodie Russell, whose comments on the subject were immensely insightful and whose views I have attempted to summarise and respond to above. I would also like to thank the members of ZZ9 Plural Z Alpha, the Douglas Adams appreciation society of the United Kingdom, of which I am a proud member, for bringing this story to my attention. Above all, my thoughts are with Steve, who has inadvertently become a martyr to some really entertaining conversations around artificial intelligence, and a testament to the genius of Douglas Adams’ prophetic wit.


Works Cited:

Althusser, Louis. Lenin and Philosophy and Other Essays. Trans. Ben Brewster. London: New Left Books, 1971. Print.

Goodenough, Jerry. ‘“I Think You Ought to Know I’m Feeling Very Depressed”: Marvin and Artificial Intelligence.’ Philosophy & the Hitchhiker’s Guide to the Galaxy. Ed. Nicholas Joll. Basingstoke: Palgrave, 2012. 129-52. Print.