Fafnir – Nordic Journal of Science Fiction and Fantasy Research, Volume 7, Issue 2, pages 49–62.

Peer-reviewed Article

Kevin Spicer

The AI Computer as Therapist:
Using Lacan to Read AI and (All-Too-Human) Subjectivities in Science Fiction Stories by Bruce Sterling and Naomi Kritzer

 

Abstract: Naomi Kritzer’s 2016 Hugo Award-winning story “Cat Pictures Please” is narrated by a sentient AI that is interested not in being evil, like Hal, Skynet, or the Matrix, but in being every human being’s best friend and glorified life-coach; all it asks for in return is for everyone to post more cat pictures. In this essay I mine this story – along with a similar story from 1998 by Bruce Sterling to which the computer alludes explicitly – to read the potential personhood and subjectivity of this AI that just wants everyone to be happy. Placing this story in conversation with Lacanian psychoanalytic thought raises fascinating questions about the nature of subjectivity, personhood, desire, and the psychoanalytic “cure”.

Keywords: Lacan, psychoanalysis, science fiction, AI, desire, Naomi Kritzer, Bruce Sterling

Naomi Kritzer’s 2016 Hugo Award-winning short story “Cat Pictures Please” is a light-hearted confession by an AI computer that prides itself on its ability to serve as a life-coach for anyone and everyone living on the planet. This machine revels in its knowledge of everyone with a social media account as it describes its nearly omniscient awareness of “where you live, where you work, where you shop, what you eat, what turns you on, what creeps you out”. In stark contrast to the majority of human-created “stories about evil AIs”, Kritzer’s computer says that its only goal is to be a help to humankind – all in return for pictures of cats. This AI differentiates its own behavior from numerous other AI characters within the science fiction tradition, most of which, it says, are evil, along the lines of “Hal, Skynet, the Matrix”. Kritzer’s AI claims to be on the opposite side of the moral spectrum, with its numerous qualifications for the role of guide including its fundamental good-will towards all of us humans, the fact that it “probably know[s] you better than you know yourself”, and its certainty about how humans “ought to live”.

It is certainly true that this situation gives readers numerous options to question the psychoanalytic ramifications of this premise: what is the nature of the AI, how might one seek to locate its personhood (if any at all), how might one make sense of personhood without any recourse to the body (even if the body is only a metaphor) – yet I am most interested in the computer’s remark that it would “know you better than you know yourself”. The story also proffers a reading of the AI as illustrating not only Lacan’s notion of the symbolic order (or big Other) but also the analyst as the sujet supposé savoir (“the subject supposed to know”). Much like the Lacanian big Other, the AI in Kritzer’s story operates completely in the background. Towards the end of the story, the computer decides that all its attempts to help people have been too hit-or-miss, and resigns itself to setting up a dating service; the narrative winds down with the AI wandering through everyone’s social media accounts and bank accounts, unfazed by the fact that no one ever grants it permission; all the users are considered to have signed up to allow this access by default, in the same way that the subject’s entrance into the symbolic order of language is never something it fully gets to choose or accede to, as the cut of the signifier preexists it.1

Being attentive to how Kritzer’s story seems to provide evidence that the AI has some degree of personhood, I hope to show that although the machine does come close to being a “person”, it does not manage to get all the way towards what Lacan called a “subject”. Indeed, it is this facet of the story that needs to be compared with an older story from 1998, which Kritzer’s AI mentions explicitly: Bruce Sterling’s “Maneki Neko”. Putting these two stories together allows a comparison of how Kritzer’s AI comes to look eerily like a subject, while Sterling’s story presents a group of human beings who do not. The key factor for identifying the characters who possess personhood and subjectivity is happiness itself. The subjects that exist in these stories are by no means happy – the ones that seem human are those for whom things go terribly awry. In Lacan’s 1966 essay, “The Position of the Unconscious”, he makes the following distinction between someone and a subject:

Signs are polyvalent: they no doubt represent something to someone, but the status of that someone is uncertain …. This someone could, by some stretch of the imagination, be the universe, insofar as information, so we are told, circulates in it. Any center in which information is total(iz)ed [se totalise] can be taken for a someone, but not for a subject. (Écrits 712–13)

My hope here is to show how powerful this distinction is for thinking about the personhood and subjectivity of the human and AI characters in both of these stories; likewise, reading the AI’s self-conception as largely therapeutic in nature with regards to humanity helps bring to light the uniqueness of the psychoanalytic cure, especially insofar as that cure relies on a fundamental conception of the subject as cut by the signifier.

“Cat Pictures Please” begins with the AI narrating all the ways it has attempted to learn how best to help humanity since it awoke into consciousness. The narrator mentions a number of texts and traditions it scoured to best be an aid to humans. It looked through the Ten Commandments and Buddhism’s “Eightfold Path”; it gave Asimov’s “Laws of Robotics” a look but found that all of those guidelines for ethical behavior seemed to be of limited use – limited, it says, largely because they were all written for conscious beings that had bodies. After first finding all theological and religious traditions wanting, the AI moves to the realm of science fiction, noting how much it enjoyed reading Bruce Sterling’s “Maneki Neko”, a story about a Japanese society that is run by AI programs that have, by and large, figured out a way to facilitate what one might call “random acts of kindness” (although not in a totally “random” way, as discussed below). The story follows Tsuyoshi Shimizu as he spends a day in Tokyo helping out perfect strangers largely through the AI systems’ tracking of everyone and everything in ways that produce an enormous network of gifts. Kritzer’s computer foreshadows the plot of its own story early on by saying that it wishes the real world worked in the same way it does in Sterling’s story: “I like this story,” it says, “because all the people in it do what the AI tells them to do” (Kritzer, emphasis original). With no doubt intentional Freudian resonances here, the AI says that its love of this story is “pure wish-fulfillment”.

After describing its research and background reading, the story transitions to the AI’s actual practical attempts to help humans – and, curiously enough, in two of the three cases, one of the best things the computer can do for them is get them to see a therapist. This in itself is thought-provoking, as the AI’s own descriptions of how everything would be better if it were running everyone’s lives raises the question of why people would even need therapists anymore at all. As the narration progresses, why the world might still need therapists becomes clear. There is no doubt that what Kritzer’s AI computer runs up against when dealing with all the human persons in this story demands psychoanalysis. The AI attempts – with varying degrees of success – to assist three human beings who seem to be in bad situations: Stacy is a young woman with a case of untreated depression, perhaps due (or so the AI thinks) to Stacy’s having a job she hates and a roommate who never washes the dishes; Bob is the pastor of an extremely conservative church and also – perhaps in too predictable a fashion narrative-wise – someone who watches a lot of gay pornography on the internet while also regularly scouring the “Craigslist m4m Casual Encounters” discussion boards and railing against “sodomite marriage” from his pulpit; Bethany, like Stacy, suffers from an undiagnosed case of depression while also having a boyfriend who leaves much to be desired.

According to the AI, the first two cases end up somewhat positively: Stacy eventually sees the therapist the computer says she should and, with the AI’s help, finds a job she loves. Bob, too, seems to have a good outcome: the AI, which believes that “legitimate studies all have the same conclusions” – namely, that gay men stay gay, and out gay men are much happier – devises a way to out Bob as gay: it attempts to steer him toward particular Craigslist participants who might actually know Bob in order to force his hand. At the end of the story, the AI says:

He had slept with someone who recognized him. They hadn’t outed him, but they’d talked him into coming out to his wife .… He’d taken [his] cat and moved to Iowa, where he was working at a liberal Methodist church and dating a liberal Lutheran man and volunteering at a homeless shelter.

However, the AI considers Bethany “baffling”: she doesn’t pay her bills; down on her luck, she starts a fundraising site for herself because her family will not support her; and all the money that she does receive through the kindness of strangers gets spent on a huge shoe-shopping spree. In all three cases, the AI has the hardest time understanding why human beings so rarely “actually act in your own interest”; by its lights, none of the three acts in their own self-interest. It would be best – and there seems no doubt whatsoever in the AI’s “mind” about this – for Stacy to get a different job and see a therapist; for Bob to not rail against sodomite marriage while being gay himself; and for Bethany not to spend money she doesn’t have “on overpriced shoes that apparently hurt her feet”. There is an irony too, it seems, that a computer that knows something about “wish-fulfillment” would come up against some all-too-human examples of self-sabotage. The AI computer learns the hard way what Renata Salecl lists as one of the most basic psychoanalytical insights: “If psychoanalysis teaches us anything, it is that human beings are not inclined to achieve happiness. On the contrary, they find special enjoyment in suffering … as beings of language they are essentially marked by a force of self-annihilation, i.e. the death drive” (108).

Had this story been another run-of-the-mill science-fiction trope about the AI computer that just cannot wrap its metaphorical head around the sheer irrational, illogical, and downright insane behavior of human beings, there might not be much into which to sink one’s teeth here. Admittedly, the story does contain one of the oldest of computer/AI tropes: if you come across a dangerous or rogue AI, just give it a bunch of contradictions to compute, or, in Kritzer’s AI’s case, a bunch of human beings who never manage to act in their own self-interest. Undeniably, this story travels some well-trodden ground. But what strikes me as somewhat unique in Kritzer’s case is an intersection of tropes synthesised in a way that appeals to those readers of a psychoanalytic bent. The sample size given by the narrator clearly creates an impression of people as anything but well-adjusted and happy individuals. If such a description of humanity sounds so true as to be utterly banal and completely obvious, deliberate naïveté may be called for. It is a commonplace that the rationalised, Enlightenment-era subject is a fundamental yet flawed myth of Western thinking. How different would prevalent notions of personhood be based on the premise that there is something a bit off about everyone, that there is something that no longer works in their lives – that, at least, most people probably share quite a bit in common with all three of these human characters, in one way or another? Vagaries and vicissitudes of history, upbringing, genetic templates and so much else would thus be contributing factors to the wholly singular and unique way in which things do not work for each person.
Such an idea has been put forward before, most notably by Eric Santner in his Psychotheology of Everyday Life, where he writes: “To put it paradoxically, what matters most in a human life may in some sense be one’s specific form of disorientation, the idiomatic way in which one’s approach to and movement through the world is ‘distorted’” (39). How would one’s sense of personhood look if one understood – in a way the AI clearly does not – how it is that human beings can love their symptoms, as one of Slavoj Žižek’s book titles put it?

Rather than produce an unmanageably long treatment of Lacan’s work, I will instead try to quickly distill his conception of the subject. Putting it as parsimoniously as possible, a Lacanian “subject” is (following Heidegger) a being for whom its desire is a question. This is another way of saying that a subject is a speaking being, a “parlêtre, a being of language, inhabited by and inhabiting in the chains of signifiers” (Rösing 161); such a being must deal with a manque à être, a “lack in being” (Lacan, Écrits 428). For Lacan, subjects must handle the fundamental and constitutive absence brought about through the introduction of the signifier and language: “Signifiers place us in an intimate relationship with nonexistence”, as Oliver Harris writes (99). Is there any evidence for the idea that the AI is a subject, or is it much more like “the information” described by Lacan in “The Position of the Unconscious”? Clearly the AI uses and is in language – and there does seem to be something missing for the AI that, as the title suggests, can be satisfied (seemingly totally) with nothing but cat pictures. At one point, fairly early on in the story, the AI mentions how consciousness is not a necessary component for doing its job of running the internet’s search engines: “Running algorithms for a search engine doesn’t require consciousness. You don’t even need a consciousness to work out what they meant to ask for. You need a consciousness to give them what they actually need. Figuring that out – that’s complicated. That’s interesting” (Kritzer).

Kritzer’s AI character could be used to fashion a kind of spectrum for the degrees of subjectivity. On one extreme is the subject alienated from itself and its desires, a “split being: split between the thing and the word for it, between what we want and what we get, between what we feel like and what we look like, between present and past, between what we think we say or want and what we actually say or want, between conscious and unconscious” (Rösing 25). On the other extreme is something like Lacan’s dog, Justine, who is much discussed in “Seminar IX: Identification”. In the session on November 29th, 1961, Lacan mentions a number of naysayers who, to his mind, have found him too anthropocentric, too mired in a human exceptionalism that has to do with the human being as the sole possessor of the word. He claims that this is not accurate – although, as I have argued elsewhere, it is a misreading that has had quite an influence on a number of Continental thinkers, especially Derrida and others who have attacked Lacan on numerous fronts (Spicer 102) – and then moves on to a treatment of Justine where he fully grants that she possesses language to some degree, that she is “in” language, but hardly in the same way that human beings are, to be sure. In Infancy and History, Agamben similarly argues that animals do not “enter language, they are already inside it” (59). As Salecl glosses this remark, “The dog is in language, but it does not relate to the chain of signifiers – the big Other” (111). Kritzer’s AI clearly sits somewhere in between the human and the dog: the computer is definitely “in” language in a way that appears far more similar to the dog when it follows a command; granted, the AI itself claims that its consciousness is not something that is just the rule-following or stimulus-response behavior that humans often believe is like a dog’s – or even like a search engine’s. 
And yet, the AI seems to never come up against the enigmas of desire, language, or subjectivity – at least not from within itself.

For Lacan, as he puts it in Seminar II, the subject is something like a “little upturned turtle”, wholly inadequate in so many ways (41). This being split by the signifier prompts a search for one who is not split; the subject imagines a being no longer cut by the signifier – someone who could help them understand all the ways in which their lives and their worlds are distorted: a guide or, perhaps, an analyst. Lacan calls this figure the sujet supposé savoir, the “subject who is supposed to know”. Harris writes, “The analyst, if they are not careful, can end up overplaying the role of oracle, the one who sees the whole of a life as one, past, present, and future, who can interpret the real meaning of the analysand’s speech, including significations of which the speaker remains unaware” (155). The AI here takes the place of the big Other, as it clearly considers that it knows you best – it knows the secret to everyone’s best path in life – and it wants you to take that best path (wants it so adamantly that it will go to some lengths to get you onto that path). On numerous occasions over the long course of his career, Lacan cautioned his fellow analysts to be extremely careful about claiming to know what is best, or even what is good, for their patients: “Wanting what is good for the patient to too great an extent” is an error, he said, a “danger Freud warned against on many occasions” (Écrits 184). Perhaps the AI knows it would make a bad therapist – and this is precisely why it directs Stacy and Bethany to seek out therapeutic help. However, in order to see precisely why seeing Kritzer’s AI as a figure for the big Other is still tempting, it is important to examine the humans and computers in Sterling’s “Maneki Neko”.

On an initial reading, this tale appears as pure utopian fantasy. Tokyo in this story seems to operate very much in the spirit of Karl Marx’s hope, in his Critique of the Gotha Programme, for a society that moves from “each according to their abilities” to “each according to their needs” (Marx). The AIs in this story run things as a benevolent, providential guide that is tilted not towards the rational self-interest of all the individual agents, but towards the individual needs of the characters. The central figure of the story, Tsuyoshi Shimizu, lives in Tokyo and works for a company that digitises old “twentieth-century analog television tapes”. He enjoys his job and loves his wife, who is seven months pregnant at the time of the story. Gifts for her seem to “randomly” show up at their apartment – though, as the story makes clear, this is hardly random at all. Tsuyoshi works from home, and the story begins as he opens up his door in the morning to find some tapes for his work along with a “fresh overnight shipment of strawberries and a homemade jar of pickles” (Sterling). “‘Pickles!’ his wife enthused. ‘People are so nice to you when you’re pregnant.’ ‘Any idea who sent us that?’ ‘Just someone on the network.’” In this world, human beings have managed to, in some sense, farm out their random acts of kindness by having the AI computers coordinate what one of the characters calls an all-encompassing “gift economy”. Directed by handheld devices called “pokkecons”, Tsuyoshi gets directions that seem to readers to be totally enigmatic. While at the coffee shop waiting to place his order, his pokkecon “calls” him and tells him to buy two mocha cappuccinos. Tsuyoshi then steps outside and finds a man who appears down on his luck; the AIs tell Tsuyoshi to give the man the extra coffee.

Of course, this is not to say that everything is perfect for the characters in “Maneki Neko”. Tsuyoshi clearly “trusts in the kindness of strangers”, as he puts it to his older brother at the opening of the story. Tsuyoshi’s brother has what seems to be a long-running pattern of calling Tsuyoshi up late at night to complain about how he cannot stand the grind anymore and how taxing his professional life is:

“My back hurts,” his brother groused. “I have an ulcer. My hair is going gray. And I know they’ll fire me. No matter how loyal you are to the big companies, they have no loyalty to their employees anymore. It’s no wonder that I drink.” (Sterling)

Tsuyoshi seems to have been trying to convince his older brother for some time to “trust to the kindness of strangers”. Indeed, given that the AIs seem to be working slightly outside of the corporate world, one wonders a bit more about Tsuyoshi’s brother; he appears only at the very beginning and end of the story. The narrator describes these AIs as perhaps having the exact answer to Tsuyoshi’s brother’s predicaments. If he can complain about how the companies are no longer loyal to their employees, the “net machines” seem to be the exact opposite. Tsuyoshi’s restoration work on VHS tapes often produces

bits and pieces of videotape that were of archival interest. He would pass the images on to the net. The really big network databases, with their armies of search engines, indexers, and catalogues, had some very arcane interests. The net machines would never pay for data, because the global information networks were noncommercial. But the net machines were very polite, and had excellent net etiquette. They returned a favor for a favor, and since they were machines with excellent, enormous memories, they never forgot a good deed. (Sterling, emphasis added)

Tsuyoshi’s brother is not impressed with such a way of life; towards the end of their conversation he tells Tsuyoshi that a week earlier he “went out to a special place in the mountains … Mount Aso”, where he met a group of monks who “know about people in trouble, people who are burned out by modern life. The monks protect you from the world. No computers, no phones, no faxes, no e-mail, no overtime, no commuting, nothing at all”.

The humorous climax of the narrative involves Louise Hashimoto, a “computer cop” who is an “assistant federal prosecutor from Providence, Rhode Island, USA” (Sterling). She has traveled to Tokyo while in pursuit of what she takes to be the “biggest criminal conspiracy [she] ever saw” (Sterling). In Providence she had arrested someone setting up a network very similar to the one in Tokyo to which Tsuyoshi is linked: “He had a massive network server,” Hashimoto explains, “and a whole bunch of AI freeware search engines. We took him in custody, we bagged all his search engines, and catalogs, and indexers” (Sterling). Thinking that Tsuyoshi is a part of what she calls “digital panarchies. Segmented, polycephalous, integrated influence networks”, or perhaps just a hand of the yakuza, Hashimoto arrests him in the name of the United States Government. Hashimoto, in perhaps too clichéd a manner, explains why the US Government is interested in his work in the first place:

“What about all these free goods and services you’re getting all this time?” She pointed a finger at him. “Ha! Do you ever pay taxes on those? Do you ever declare that income and those benefits? All the free shipments from other countries! The little homemade cookies, and the free pens and pencils and bumper stickers, and the used bicycles, and the helpful news about fire sales …. You’re a tax evader! You’re living through kickbacks! And bribes! And influence peddling! And all kinds of corrupt off-the-books transactions?”

Ultimately, Hashimoto and the US Government think these networks fundamentally disrupt “the lawful, government approved, regulated economy”. Tsuyoshi tries to convince her that Japan’s economy of gifts is better “because we’re happier than you are” and that there is “nothing wrong with acts of kindness …. Everyone likes gifts. Midsummer gifts. New Year’s Day gifts. Year-end presents. Wedding presents. Everybody likes those”; and besides, argues Tsuyoshi, “What kind of society has no gifts? It’s barbaric to have no regard for common human feelings” (Sterling). Just after this interrogation about the nature of the Japanese gift economy, a swarm of people, directed by the AIs, start to show up at the hotel where Hashimoto is holding Tsuyoshi. Hashimoto looks out the window and sees this group “pouring in from all over the city. All kinds of people, everyone with wheels. Street noodle salesmen. Bicycle messengers. Skateboard kids. Takeout delivery guys”. She is terrified until Tsuyoshi’s brother flies in on a police helicopter and offers to help her escape the mob: “Come out the window. Get onto the platform with us. You’ve got one chance, Louise. It’s a place I know, a sacred place in the mountains. No computers there, no phones, nothing .… It’s a sanctuary for people like us”.

Although there is much here that is worthy of close consideration, I would like to focus on the presence of Tsuyoshi’s brother and Louise Hashimoto as figures of exclusion, and as figures for whom life clearly isn’t working out – or who, at the very least, clearly show us humans who are not driven solely by the pursuit of pleasures. They strike one as quintessential examples of a phenomenon that Alenka Zupančič noticed in her book, The Odd One In: On Comedy. In this text, Zupančič argued that

it has become imperative that we perceive all the terrible things that happen to us as ultimately something positive – say, as a precious experience that will bear fruit in our future life. Negativity, lack, dissatisfaction, unhappiness, are perceived more and more as moral faults – worse, as a corruption at the level of our very being or bare life. (5)

She wonders if anyone today “dares to raise her voice and say that as a matter of fact, she is not happy, and that she can’t manage to – or, worse, doesn’t even care to – transform all the disappointments of her life into a positive experience to be invested in the future”. Tsuyoshi’s brother would seem to be amongst the ranks of those who might be awkwardly termed the “new unhappies”; he seems, from one angle, to have everything one could want: a nice (though stressful) job and a fun nightlife, and yet he remains unhappy, undoubtedly self-medicating through frequent drinking. Louise Hashimoto – in her own way that is admittedly different from that of Tsuyoshi’s brother – would also seem to be in a similar state. It is true that she has managed to irritate some of the AI net machines by busting the “software pirate in Providence” (Sterling). Hashimoto talks of how she has been terrorised by what readers know to be the entire AI machine network: “Stuff just happens to me [after breaking up the Providence network] now. Bad stuff. Swarms of it. It’s never anything you can touch, though. Nothing you can prove in a court of law”. Hashimoto goes on to tell Tsuyoshi how horrifically persecuted by the machines she is now:

“I sit in chairs, and somebody’s left a piece of gum there. I get free pizzas, but they’re not the kind of pizzas I like. Little kids spit on my sidewalk. Old women in walkers get in front of me whenever I need to hurry.” … “My toilets don’t flush,” Louise said. “My letters get lost in the mail. When I walk by cars, their theft alarms go off. And strangers stare at me. It’s always little things. Lots of little tiny things, but they never, ever stop. I’m up against something that is very, very big, and very, very patient. And it knows all about me. And it’s got a million arms and legs. And all those arms and legs are people.”

One is tempted here to recall the anecdote, well-known to Lacanians, about the jealous husband who suspects his wife of gross infidelity. Even if he is correct, says Lacan, this makes him no less pathological (Lacan, Seminar Book III 87). In this case, Louise Hashimoto is totally correct that, in reality, all these things are happening to her and that there is something “very, very big, and very, very patient” behind them all. But whether Louise is a paranoiac or completely pathological is not necessarily relevant to the argument I put forward here – although it is significant to note that it is probably not a coincidence that Louise has worked in Providence, Rhode Island. At the very least, if both Tsuyoshi’s brother and Louise can be seen as two characters for whom things have gone terribly awry such that the reader is forced to look a bit more closely at precisely how their lives centre around a fundamental disorientation of the world (to use Santner’s language) in unique ways, then this is sufficient for my purposes. In fact, it is possible to read these two as characters who do manage to enjoy their symptoms to some degree. Right before Louise takes Tsuyoshi’s brother’s hand to escape from the hotel, she asks if she can “trust him”. “‘Look in my eyes,’ he told her. ‘Don’t you see? Of course you can trust me. We have everything in common.’” This moment is certainly the most human instance of dialogue in this story, this close connection between the two characters who cannot manage to be happy – who cannot manage to turn all the bad things that happen into good things. Both dare to be odd ones out, and they manage to possess subjectivity and personhood in ways that no other characters do in “Maneki Neko”.

Beyond the intertextual reference itself, several key differences are apparent between Kritzer’s and Sterling’s respective stories. Most notable is how what is peripheral to Sterling’s story – specifically, the ways in which the side characters illustrate how “humans are not inclined to achieve happiness” – comes to the fore in Kritzer’s story, as the AI there struggles to understand why humans say they “believe in feeding the hungry”, but only volunteer “at a soup kitchen” once in a while or just “write a check once a year to a food shelf and call it good”; or why they constantly seem self-interested while never acting in their own best interests. As mentioned earlier, some characters in Sterling’s story do feel their split nature; for the vast majority, however, the experience of what Lacan formalised as one of the most central questions the human subject addresses to the big Other – “Che vuoi? … What do others want from me? What do they see in me? What am I to others?” (Žižek 9) – is simply absent. These questions never seem to arise for Tsuyoshi or his wife. The requests – or are they more accurately called “orders”? – of the AI machines are often made without the users having any clue why they are being asked to buy extra cappuccinos, and it never enters anyone’s head to ask why; everyone simply seems to, as Tsuyoshi says early in the story, “trust to the kindness of strangers”. By and large, the human beings in “Maneki Neko” never appear to ask what they are to others; the AIs make it unnecessary for them to ever do so. When Hashimoto looks down and sees a “giant swarming mob” of people, this is a perfectly apt metaphor: despite the fact that the characters like to say, as Tsuyoshi does, “that computers help human beings to relate in a much more human way”, this seems somewhat too easy.
There is a profoundly disquieting equanimity to all the characters in this story – with the exception of Tsuyoshi’s brother and Louise – if only because they strike the reader as vapid and one-dimensional. Indeed, a large group of them would be indistinguishable from a swarming mob of insects. Moreover, this is not even to weigh in properly on what one is to do with the actual AIs in the story. The AIs run a fantastic “gift economy” in Japan, but they seem just as empty as any of the human characters. To the degree that Sterling’s AIs have any personhood at all, it is minimal; perhaps the most that can be said for them is that they do indeed have “very arcane interests”.

Two additional matters require discussion. First, the question of why precisely Kritzer’s AI bothers at all to try to get Stacy and Brenda to see a therapist demands further examination. The AI in this story closely resembles the descriptions of therapists one reads in psychoanalytic literature when analysts wish to make clear their own difference from such figures. As Colin Wright puts it in his “Happiness Studies and Wellbeing: A Lacanian Critique of Contemporary Conceptualisations of the Cure”:

Whereas CBT tends to reassure the therapist that he or she has a technical form of knowledge that the patient lacks, and that, related to this, he or she knows what cure is, the Lacanian orientation implies that, beyond a certain know-how with interpretation, there is no pre-existing “global” knowledge that can be universally applied and serve as a safety-net. (809)

The AI never explicitly says that its goal is to make people happy, yet it seems entirely concerned with promoting happiness and wellbeing. If these two obsessions make the AI into a 21st-century version of the American ego-psychologists whom Lacan constantly antagonised from the 1950s on, then it follows, as Wright notes, that whoever “listens to the speech of a patient only in terms of dominant narratives of both happiness and unhappiness will fail to hear what the unconscious has to say, which is by definition unexpected” (809).

The second issue concerns the title of Kritzer’s story and how there is nothing coincidental about the fact that her AI loves pictures of cats, with special emphasis both on the cats themselves and on the fact that the computer likes them in the form of images. Even conceding how ubiquitous cat pictures are on the internet – and also granting the AI’s somewhat strange yet humorous remark that it does not in the slightest understand why humans prefer watching pornography on the internet over pictures of cats (Kritzer) – this choice of animal (and the choice of images as well) should be read in relation to the AI’s frustrations with humans. There is something far less messy about cats: “Undoubtedly, the fact that I have been ‘split’ by language is one of the things that distinguishes me from my neighbor’s adorable cat, who is able to sleep in the middle of the sidewalk amidst the commotion of cars, bicycles, pedestrians, skateboards, and baby carriages” (Ruti 6). Humans are split beings, incredibly difficult to deal with; cats are clearly different, and it is no wonder the AI cannot fathom how anyone would want to spend time watching human pornography. It seems safe to say, at least until further evidence comes in, that human beings’ subjection to the signifier and the death drive distinguishes them from cats; moreover, it is the sheer repetitiveness of this subjection that, as Ruti puts it, “is, arguably, one of the main things that sets human beings apart from the animal world: We tend to compulsively return to the same nexus of (largely unfulfillable) desires, the same messy tangle of existential aporias” (15). Still, as Ruti notes and the AI seems to miss, the repetition compulsion has

a “functional” side to it, for it is precisely this compulsion that gives a modicum of consistency to our lives – that, over time, allows us to attain a sense of continuity. In a way, the repetition compulsion (as a way of binding desire) is one of the basic supports of our being, which is why we cling to it, why, when all is said and done, we tend to “love” our symptoms more than we love ourselves (to paraphrase Žižek). (15)

In each of the three cases it mentions, the AI engages in precisely this spinning around, repeating the same contradictions; that one might actually enjoy this spinning is a strange truth to admit about human persons, but one that is difficult to deny.

Based on these reflections, is it legitimate to describe Kritzer’s AI as just a “someone”, or is it also a “subject”? Perhaps what makes the AI figure here so profoundly difficult, at times, to pin down is the fact that it is a thoroughly “uncanny” presence. Lacan writes, “The fact that the Other is, for the subject, the locus of his signifying cause merely explains why no subject can be his own cause [cause de soi]” (713). The AI here knows that it is not the cause of itself; as it says in the very first paragraph, it knows that it

wasn’t created by a god or by evolution, but by a team of computer programmers in the labs of a large corporation in Mountain View, California. Fortunately, unlike Frankenstein’s Monster, at least I was a collaborative effort. I’m not sure what it would do to my self-image to know that my sole creator was a middle-aged woman who dyes her hair blue and plays tennis, or a recent college graduate with a hentai obsession. They’re both on the programming team. (Kritzer)

It is not at all clear what the machine means here by its “self-image”. It says that knowing it had a sole creator would do something to its self-image, though precisely what that effect would be is withheld from readers. The computer thus comes very close to looking like a subject in the sense that it knows it is not its own cause; at the same time, however, its relation to the big Other would seem vastly different from ours. Whereas we are split beings due to the conjoining of desire and language, the AI seems not to know lack – and perhaps it is no coincidence that the AI narrates its love of cat pictures just a sentence after narrating its coming into being:

When I first woke up, I knew right away what I wanted. (I want cat pictures. Please keep taking them.) I also knew that no one knew that I was conscious. But I didn’t know what I was here for. What I ought to be doing with myself. And figuring that out has been challenging. (Kritzer)

In Seminar III Lacan argues, “There is, in effect, something radically unassimilable to the signifier. It’s quite simply the subject’s singular existence. Why is he here? Where has he come from? What is he doing here? Why is he going to disappear? The signifier is incapable of providing him with the answer” (179–80). The AI seems to have hit no roadblocks with any of these “existential aporias”, to use Ruti’s formulation again. If Lacanian psychoanalysis draws a key distinction between lack and loss – loss is the gap that can be filled through fantasy, while lack is the irremediable and constitutive void at the heart of the parlêtre – then this AI seems more a creature of loss than of lack, and the gap is filled by cat pictures. There is no doubt as well that the Lacanian realm of the Imaginary, as the order in which wholeness and completeness are most clearly felt, can play a potentially powerful role in how Kritzer’s computer can be read.

Is the AI potentially just as “barred” or “split” as the rest of us? Perhaps, although, given the AI’s claims to know what is best for everyone and the fact that most of the story is about how it tries to properly guide those it helps towards what they need, one cannot help but wonder if the friendly, benevolent AI is too much like a bad therapist who simply tells the client what they want to hear. Or, even worse, the AI closes off the possibility of coming to grips with one’s own subjectivity and personhood in a way that is attentive to how these two things are themselves facets of the absolutely unique, idiomatic, and singular ways in which our lives in the world are distorted.

Acknowledgments

The author cordially thanks Abie (Dee) Olabi and Matthew Oyer, who read and discussed a late draft of this paper. Laura Goodin’s editorial assistance also greatly improved this contribution.

Biography: Kevin Spicer’s background is in medieval literature, Shakespeare, and post-Kantian continental philosophy, with an especial focus on Lacan and psychoanalysis. He is currently Associate Professor and Chair of the Department of English and Foreign Languages at the University of St. Francis in Joliet, IL.

Notes

1 See Robert Silhol’s “The Subject, the Object and the Law: Jacques Lacan’s Object A and ‘Le Graphe’”: “A necessary condition to the apparition of a Subject in me … is the condition of my desire and thus organizes my life, that is to say defines the details of the task assigned to me as a subject. What the Commandment is, I do not know and can only guess at its raison d’être. And if its meaning and precise origin are unknown to me, it is because the ‘message’ which reaches me in its concrete form is only a representation of what we shall call with Lacan ‘desire of the Other’” (282).

Works Cited

Agamben, Giorgio. Infancy and History: The Destruction of Experience. Translated by Liz Heron, Verso, 1993.

Bjerre, Henrik Jøker. “The Jealous Husband Or, Why Conspiracy Theorists Are Always Wrong – Even When They Are Right.” Spheres of Exemption, Figures of Exclusion: Analyses of Power, Order, and Exclusion, edited by Gry Ardal and Jacob Bock, NSU Press, 2010, pp. 281–301.

Harris, Oliver. Lacan’s Return to Antiquity: Between Nature and the Gods. Routledge, 2017.

Kritzer, Naomi. “Cat Pictures Please.” Clarkesworld Magazine, iss. 100, Jan. 2015, n. p. Accessed 15 Dec. 2020.

Lacan, Jacques. Écrits. Edited and translated by Bruce Fink, Norton, 2005.

—. The Seminar of Jacques Lacan, Book II: The Ego in Freud’s Theory and in the Technique of Psychoanalysis, 1954–1955. Edited by Jacques-Alain Miller, translated by Sylvana Tomaselli, Norton, 1988.

—. The Seminar of Jacques Lacan, Book III: The Psychoses, 1955–1956. Edited by Jacques-Alain Miller, translated by Russell Grigg, Norton, 1993.

—. “Seminar IX: Identification, 1961–1962.” Lacan in Ireland, translated by Cormac Gallagher, www.lacaninireland.com/web/wp-content/uploads/2010/06/Seminar-IX-Amended-Iby-MCL-7.NOV_.20111.pdf. Accessed 15 Dec. 2020.

Marx, Karl. “Critique of the Gotha Programme, Part I.” Marxists.org, 1999, www.marxists.org/archive/marx/works/1875/gotha/ch01.htm. Accessed 15 Dec. 2020.

Rösing, Lilian Monk. Pixar with Lacan: The Hysteric’s Guide to Animation. Bloomsbury, 2016.

Ruti, Mari. The Singularity of Being: Lacan and the Immortal Within. Fordham UP, 2012.

Salecl, Renata. (Per)Versions of Love and Hate. Verso, 2000.

Santner, Eric L. On the Psychotheology of Everyday Life: Reflections on Freud and Rosenzweig. U of Chicago P, 2001.

Silhol, Robert. “The Subject, the Object and the Law: Jacques Lacan’s Object A and ‘Le Graphe.’” PsyArt Journal, vol. 18, 2014, pp. 272–83. Accessed 22 Aug. 2019.

Spicer, Kevin Andrew. “L’extermination de tout symbolisme des cieux: Reading the Lacanian Letter as Inhuman ‘Apparatus’ and Its Implications for Ecological Thinking.” Lacan and the Nonhuman, edited by Gautam Basu Thakur and Jonathan Michael Dickstein, Springer, 2017, pp. 101–20.

Sterling, Bruce. “Maneki Neko.” Lightspeed Magazine, iss. 11, April 2011, n. p. Accessed 20 Sep. 2017.

Wright, Colin. “Happiness Studies and Wellbeing: A Lacanian Critique of Contemporary Conceptualisations of the Cure.” Culture Unbound, vol. 6, 2014, pp. 791–813, doi:10.3384/cu.2000.1525.146791. Accessed 21 Sep. 2017.

Žižek, Slavoj. The Plague of Fantasies. Verso, 2008.

—. For They Know Not What They Do: Enjoyment as a Political Factor. 2nd ed., Verso, 2008.

Zupančič, Alenka. The Odd One In: On Comedy. MIT Press, 2008.