Digital Assistants and the Loss of the Social - Jonathan S. Carter
On Super Bowl Sunday, millions of people across the U.S. sat down to watch one of the greatest spectacles in American entertainment. Some also watched a football game. Certainly, the Super Bowl commercials provided a range of interesting topics and controversies. While many advertisements went for laughs, candidates also made their 2020 pitches, electric vehicles got oversized, and we were promised that drinking beer could help save small farms and the environment. However, the advertisement that generated the most buzz on my social media was Google’s “Loretta” ad. Based on real events, it told the story of an older man using the Google digital assistant to save, organize, and add details to his memories in order to strengthen his mnemonic connection to his dead wife.
While most of my social media was abuzz about the feels that this ad inspired, I was struck by a more hesitant response. Right after the ad aired, a friend on social media posted, “[the ad] made me cry and fear for our technological isolation at the same time.” This mix of optimism and fear reflects the larger tensions popular culture has with our technological revolutions. While technology companies have long promised that their assorted innovations will make our lives better, technoskeptics have been warning of the danger of new media since at least Plato’s admonishment of writing in the Phaedrus. Recognizing the valid concerns of each side, the philosopher of technology Bernard Stiegler calls us to attend to the pharmacological potential of all technologies. Yes, new technologies can be tools of social control, but they can also be used to foster new ways of existing in the world.
Google’s “Loretta” advertisement represents a larger trend of framing technology as a way to improve social conditions. Amazon argued that Alexa was better than human assistance, Facebook touted the value of groups, and (just after the game) Nintendo argued that the Switch brings families together. Combined, these ads represent the pharmacological range of new technologies and their relation to the social. Before I explore this range, I will briefly articulate the relationship between technology, memory, and identity. I conclude with a call for us to say no(esis) to the vision of digital assistants offered during the Super Bowl.
Memory technology and the making of ourselves
When considering questions of identity and technology, it is useful to turn to the works of Bernard Stiegler. Arguing that western philosophy went wrong with Plato’s technophobia, Stiegler calls for a turn towards technics—which he defines as “organized inorganic matter”—to understand how individual and collective identity manifest as effects of our interactions with technology. Specifically, he contends that all technologies are loci of retention. As an individual uses any technic, a memory of its socially prescribed use is retained within the technic. Subsequent uses of the technic by others not only interpellate the user into a collective identity based on shared usage but also shape that individual’s identity going forward. For example, let us take the technic Digital Doxa—an easy example because its explicitly communicative focus means the stored memories are more explicit. As I came to write this blog, my style was dictated by the existing posts and their norms—the memories retained in the infrastructure of the blog. Moreover, by engaging in the use of the blog genre, I also become a blogger, entering a new collective identity. This creates a sense of a collective “we,” as the blog asks writers and readers to come together around the memories shared in relation with the technic. However, these identities are not static. I may gain some new sense of myself through the writing process: writing this post changes my personal biography, I may involve other Digital Doxa bloggers and thus intervene in the construction of our ongoing collective identity, and, by veering from convention, I may alter the norms of the blog, changing its identity as a technic going forward. For Stiegler, this moving nature of the tripartite identities (individual, collective, and technic) is termed transindividuation.
Since identities authorize the capacity of individuals and collectives to act, transindividuation is at the heart of the political. In the example of Digital Doxa, as the norms of the blog are concretized, they prescribe—intentionally or not—the form of future posts. While a measure of this stabilization is needed to concretize the sharing of memory, if a technical system becomes too rigid, it produces automated prescriptions that replace thought about potential action. For example, Stiegler has repeatedly warned that the autofill function of search engines such as Google encourages specific lines of inquiry when searching. While this may return the algorithmically determined “best results,” it forecloses on the new ways of thinking that might be encountered through odd search queries, accidental browsing, or leafing through content. Therefore, in a move akin to the role of dissoi logoi in rhetorical theory, Stiegler argues that technologies that allow for competing interpretations and identities make for robust and open politics. It is the clash of ideas, serendipitous finds, and frustrated generation of new search terms that makes the card catalog a more open—if less efficient—way of finding information. Given that both of the Super Bowl advertisements for digital assistants rely on the easy storage and quick recall of information as a selling point, Stiegler’s ideas about technics and memory are useful for evaluating the vision of human identity prescribed and proscribed by digital technics.
Erasing the we
The most prominent theme across the promotion of digital technologies during the Super Bowl (and in general) is their ability to improve the conditions of human life. However, Amazon’s ad for their Alexa digital assistant argued that this improvement came directly at the cost of relations with others.
The ad opens with Portia de Rossi and Ellen DeGeneres preparing to leave what we are to assume is their house. Ellen asks Alexa to adjust the thermostat on the way out the door. She then posits, “What do you think people did before Alexa?” The viewers are then treated to approximately a minute of short scenes in which humans with names similar to Alexa fail to provide the same quality of service as the AI assistant. When asked to adjust the temperature, Alessa throws a log out of the fire; Al provides his friend with mediocre jug music; Alexi provides mediocre gossip; and so on. The ad closes with Ellen asking Alexa to play her favorite song, and we are (literally) Ushered out to the song “Yeah” (with a clay jug backbeat)—demonstrating the superior responsiveness of the AI assistant.
Across these humorous vignettes, people in positions of authority or engaged in action are disappointed by the humans around them, implicitly positioning the artificial intelligence of digital assistants as all the help one needs to be more informed, enjoy work, send communication, etc. Tweeting about the commercial, Tressie McMillan Cottom placed the ad within the long history of dehumanization related to this type of labor, noting that “low key these Alexa commercials (was did people do before Alexa?!) basically harken back to slave societies. the answer is enslaved people did it.” While the bracketing of de Rossi and DeGeneres’s romantic relationship does imply that humans have some value, many of the features of friendship, such as conversation and mutual entertainment, are transferred to the digital assistant. We are never shown an explicit case of a relational task that humans are better at.
Aside from reducing human interaction to its most instrumental form, this ad also has troubling implications for the very desirability of human-to-human sociality. At its heart, the removal of human assistance removes the “we” from the process of transindividuation. There is no reason for the technology to be considered within a we of users who converse and share ideas for its use, for Alexa will always provide a better experience than other humans can. Stiegler warns that such modes of technical relation are troubling not only because they preclude politics (how can we act without a notion of “we”?) but also because the I—an understanding of the self in relation to a we—is reduced to a “one”—an identity shaped only by the programmed interactions of technical devices. Rather than sharing competing jokes or songs and debating their value, the important news, good music, and funny jokes are all given to us via Alexa without alternative options. There can be no individuation, no shared memories between coworkers and friends, only singular best practices offered by Alexa.
Similar to Amazon’s ad, Google asked us to consider the ways that a digital AI can supplement some of the less instrumental facets of human relations. Set to light, nostalgic-feeling piano music, the ad features a series of “hey google” requests made by an older man asking the AI to remember details about his presumably deceased wife. Voiced by the man who inspired the ad, the remembrances consist of a series of small moments (snorting while laughing, favorite movies, trips taken together) that coalesce to offer a memory of the love in their relationship. In allowing a widower to find joy, rather than sorrow, in negotiating the absence of Loretta, the ad evoked strong emotional reactions.
Beyond this evocative affective frame, the ad also demonstrates a more complicated relationship between individuals and AI. Positively, it gives the narrator control over his memories, allowing an externalization that helps him focus his memories into a cohesive remembrance. Google searches related to these prompts are then shown as ways to supplement these memories with additional details (maps of trips, clips of films, etc.). Google not only serves as a locus for externalized memory, but it also adds to it, making it more detailed and robust. This vision is far more open to individuation than the Alexa ad because, while Google may be framed as mnemonically superior to human memory, the direction of Google is shaped by the complicated and possibly conflicting memories of the narrator.
However, much like Alexa, Google offers a singular relationship between the user and the technic as a mediator of memories. The memories retained by the Google AI are neither shared with family nor generated through conversational remembrances. The narrator’s interactions with the AI are framed as enough to recreate the value of the relationship with Loretta. Thus, while the ad does imply that a certain level of human interaction is needed to establish the memories that make up an identity, once those memories are generated, Google can replace those people—preserving and curating memories for lone people without the need for collective engagement. Users are again reduced to a one. We need not worry about the collective relationships, social connections, or even the daily lives of older people. As long as they are plugged into Google, they can return to the past without bearing on the present. Within the frame offered by the ad, memory is no longer a tool for shaping a collective future but rather a resource for keeping us engaged only with the past.
Say no(esis) to digital assistants
Thus, even as the Loretta ad helped millions of Americans feel something tender in the middle of a spectacle of capitalist excess and masculine brutality, these ads ultimately join forces in offering digital AI as technics that will allow for the perfection of human memory. Alexa will better remember and perform the tasks we need done, and Google will allow us to remember the past so well that we need not engage with the future.
Yet as Stiegler notes, our technological future is not intrinsically bad. All technologies are pharmaka (medicines and poisons). Alexa could as easily offer a user viewpoints to be debated, and Google could create communities that come together to curate, share, and complicate memories. It is not that the technologies themselves are bad; rather, the vision they prescribe, one of perfect memory that is technically derived rather than collective memory open to negotiation and debate, should give us pause. Stiegler terms this valued openness noesis. And a third ad (aired just after the Super Bowl) gestures toward a more noetic relationship with technology.
Airing just after the game, Nintendo’s ad for the Switch showed the game system as a point of contact between a father and daughter. She calls him at work to ask about a game, and later, when he comes home, they play together. However, the game is not offered as the only feature of the relationship. Instead, just after the father takes the controller, he asks his daughter about her day. Here, the value of the Switch is not the prescription of a particular memory but its role as the focal point of a shared identity. Because father and daughter can come together over the Switch, they can be a collective that then can move and grow in other discursive ways.
Certainly, the ad still offers consumer behavior as a road to better social relations, and the Switch isn’t as complicated as the AI assistants (and thus is less able to present itself as an alternative to social relations). Yet, in offering a technic as part of a larger social interaction, one that negotiates multiple layers of interaction and types of memory, rather than as an option for perfecting a single user’s interaction, this commercial for the Switch demonstrates a vision of how we can engage technology in a more productive and meaningful way. Rather than replacing imperfect human action with digital assistants, we need technics that bring us together, creating spaces to explore the imperfections that make collectivities interesting and meaningful.
Notes
1. Bernard Stiegler, The Age of Disruption: Technology and Madness in Computational Capitalism (Malden: Polity Press, 2019), 42.
2. Bernard Stiegler, The Neganthropocene (London: Open Humanities Press, 2018), 42.
3. Stiegler, Disruption, 45-46.
4. Stiegler, Disruption, 50.
5. Safiya Noble documents how autofill does more than simply restrict options: it also reinforces and increases the circulation of problematic depictions of race, gender, etc. Moreover, Noble reminds us that while the algorithms (as technics) may promote this oppression, this is a result of the humans who coded these biases into the process. Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (New York: New York University Press, 2018), 1-2.
6. E.g., Stiegler, Disruption, 32-33.
7. Stiegler, Disruption, 5.
8. Stiegler, Disruption, 196.