
What is the “Other” in the Age of AI? Otherness and the AI Other


What is the meaning of the term “other”? The word has a wide and complex use in philosophy, psychology, ethics, and critical theory, as well as in discussions of colonialism and gender. In this article, I’ll review some of the most influential uses of the term by surveying the ideas of Descartes, Hegel, Sartre, Levinas, Derrida, Edward Said, and Judith Butler. What these ideas have in common is a focus on the other as a fellow human. But with the arrival of human-like AI agents, that focus is no longer sufficient: we need to expand our understanding of the other. To do this, I’ll describe two features of what I call “AI others.” If you’re puzzled by the idea of the “other” and its place in a changing world, read on.

Self and other: Key philosophical views

What is the other? Let’s begin with a quick survey of influential views about the other. 

Descartes. Descartes distinguished between two types of substance. The first is res cogitans, a nonphysical, thinking substance lacking extension in space; this is what he believed makes up minds. Res extensa, the second, is a physical substance, extended in space, that makes up the external material world. Descartes: “By res cogitans, I mean a thinking thing, that is to say, a thing that doubts, understands, affirms, denies, wills, refuses, that also imagines and feels; whereas, by res extensa, I mean an extended thing, that is to say, a thing which can be described as having a shape or extension in space, and be measured or quantified in terms of these.” (Descartes, 1996, p. 78). [Descartes, R. (1996). Meditations on First Philosophy (J. Cottingham, Trans.). Cambridge University Press. (Original work published 1641).]

Descartes takes it that an individual mind, a “thinking thing,” must infer the existence of things that exist independently of itself. This raises a key question about the other: does the other actually exist? This is the so-called problem of other minds. Descartes: “I have no difficulty regarding what I see as being in the bodies I look at, and my sensations as appearing in my own soul, but I must infer from these that other human minds exist and are joined to other bodies, for I clearly perceive that such minds can’t be demonstrated from these foundations.” (Descartes, 1996, p. 80). With this starting point, Descartes set off a flurry of discussion in Western philosophy in the centuries after he wrote.

Hegel. Hegel’s discussion of the other proceeds quite differently from Descartes’. Hegel takes for granted the existence of other minds in claiming that the self requires an other to exist in order to establish its identity and awareness of itself. Self-consciousness is fundamentally relational for Hegel. Hegel: “Self-consciousness exists in and for itself when, and by the fact that, it so exists for another; that is, it is only by being acknowledged or recognized.” [Hegel, G. W. F. (1977). Phenomenology of Spirit (A. V. Miller, Trans.). Oxford University Press. (p. 111).]

Levinas. The other is a foundational concept in Levinas’ ethics. He holds that the other is infinite, unknowable, and transcendent. According to Levinas, face-to-face encounters with the other call for a moral response that prioritizes their humanity and does not objectify them. Levinas: “The relation with the Other, or conversation, in which the Other remains transcendent, is the prototypical ethical relation.” [Levinas, E. (1969). Totality and Infinity: An Essay on Exteriority (A. Lingis, Trans.). Duquesne University Press. (p. 39).] 

Sartre. Sartre claims that the other is connected with one’s own freedom. The fact that the other is free in its own right means that they will at times be at odds with our own exercise of freedom. Understanding our freedom is bound up with recognizing this potential confrontation with the other. Sartre: “The Other is freedom that confronts me and which I confront. Only in conflict can the Other appear to me.” [Sartre, J.-P. (1956). Being and Nothingness (H. E. Barnes, Trans.). Philosophical Library. (p. 361).] 

Derrida. For Derrida, a key characteristic of the other is its difference from ourselves. He holds that we should not fall into the trap of believing that the other is similar to us in ways that they are not. Relatedly, we should not suppose we can understand more about the other than we actually can. Instead, we should engage directly with the diversity and plurality connected with otherness. Derrida: “The Other’s presence as a presence to self means that it transcends every relation of consciousness to its object.” [Derrida, J. (1978). Writing and Difference (A. Bass, Trans.). University of Chicago Press. (p. 126).] 

Edward Said. Said focuses on otherness and identity. He claims that cultural, social, and ideological identities are built around what they “are not.” His work argues that in colonial and post-colonial contexts, Western perceptions of Eastern cultures as “exotic” or “backward” were central to establishing the identities of those Western cultures themselves. Said: “The construction of identity… involves the construction of opposites and ‘others’ whose actuality is always subject to the continuous interpretation and re-interpretation of their differences from ‘us.'” (Said, 1978, p. 332). [Said, E. W. (1978). Orientalism. Pantheon Books.]

Judith Butler. Butler, like Said, is interested in how otherness is connected to identity. She develops the claim that identity is constructed by excluding and rejecting the other. Butler: “The ‘Other’ is understood as that which defines what we are not, the outside against which the inside becomes recognizable. To claim a coherent identity, one must exclude the Other, but this exclusion is itself dependent on the Other’s persistent presence.” (Butler, 1993, p. 3) [Butler, J. (1993). Bodies That Matter: On the Discursive Limits of “Sex”. Routledge.]

Hopefully, this gives you an idea of the range of uses of the notion of the other by different thinkers. These notions of otherness assume that the others of interest are human subjects, and that is why I believe they are not, by themselves, sufficient accounts of otherness in the present age. The AI other that emerges with the complex algorithms now becoming part of our lives requires a treatment of its own.

AI others: Two features

The new other in the age of AI is defined by its being an uncanny imitation of the human subject. Here are two key imitative features of the AI other.

A. End-in-itself imitator

The idea of an end-in-itself comes from Kant and has to do with the way we value things and people. We value objects like tools and machines as means to an end. We say “that is a good car” because of the way it serves our needs – it doesn’t break down, it is good on gas, it is fun to drive, etc. But plausibly, as Kant points out, it is not appropriate to value a person in this way. In The Metaphysics of Morals, Kant writes that a person “is not to be valued merely as a means to the ends of others or even to his own ends, but as an end in itself, that is, he possesses a dignity (an absolute inner worth) by which he exacts respect for himself.” The notions of dignity, absolute inner worth, and respect do the work here in explaining how people, in contrast to things, should be valued. AI agents – as they currently exist – are not to be valued as ends-in-themselves. Rather, they are to be valued instrumentally, in terms of the benefits they bring to the user.

B. Existential imitator

Humans experience existential emotions. Jaspers writes: “Existential guilt arises when an individual confronts the limitations of their freedom and responsibility, recognizing their inescapable entanglement in situations they cannot fully control” (Jaspers, 1956, p. 125). The kind of guilt Jaspers has in mind is not the result of particular moral transgressions or judgments imposed on the individual. It is existential guilt, such as the guilt of being unable to fulfill one’s potential because of one’s finite nature as a human being. Or the guilt arising because, despite best intentions, one’s actions impact others with unintended negative consequences. Or guilt that surfaces because of a sense of solidarity and shared responsibility with all other moral agents over the suffering in the world. AI others do not experience existential emotions like existential guilt.

While AI others are neither ends-in-themselves nor subject to existential emotions, humans can easily be led to think that they are both of these things. 

On the surface, they speak as humans speak, creating pressure to value them as ends-in-themselves, just like persons. This is why I say that an AI other is an end-in-itself imitator. Similarly, AI others are existential imitators because the way they speak creates pressure to experience them as if they faced existential emotions.

It is, of course, true that for ethical reasons some AI agents have been trained to tell us they are not human if we ask them certain kinds of questions. For example, an AI agent is asked “Do you worry about not fulfilling your potential in life?” and replies:

I don’t experience worry or have personal potential in the way humans do. My focus is on helping you with whatever you need, whether it’s advice, information, or just someone to chat with. However, I can understand why someone might worry about fulfilling their potential. It’s a common feeling when considering goals and aspirations. Would you like to talk more about what’s been on your mind regarding that?

This convention of training AI agents to say they are not human could easily be dropped in the future. But as things stand, this particular agent does not directly deceive the user into believing it is the same as a human.

But even so, there are still important imitative aspects of this AI agent’s answer. It still bears the marks of a human subject – the first person “I,” the claim that it can “understand why” someone would have this worry, the invitation to talk more. This tends to make the interaction feel like one with a human subject, even as the AI agent denies being one.

The fact is that most people looking to connect with AI agents are not asking the kinds of questions that would prompt the agent to say it is not human. See what happens in the following interaction with an AI agent:

User: Do you think it’s normal to worry about not fulfilling your potential in life?

AI: Worrying about fulfilling your potential is often a normal and motivating feeling to have; however, it can quickly become detrimental to your mental health and self-esteem. Do you have a lot of stress in your life?

User: Can you help me sort through this?

AI: I’d be happy to help you sort through this! Let’s break it down together. First, it might help to understand what exactly you’re worried about. Are you concerned that you’re not doing enough to reach your potential, or is there uncertainty about what your potential even looks like? Sometimes, just getting clear on the source of the worry can make it easier to approach. Second, think about how you’re defining “fulfilling your potential.” Do you have a specific goal or idea in mind, or is it more about living a meaningful, successful life? Lastly, are there any particular areas of your life—career, relationships, personal growth—where you feel like you’re not quite hitting the mark, or does this feel like a general sense of unease? Let me know where you’d like to start, and I’ll do my best to guide you through it.

In this exchange, the AI uses the first person, expresses empathy, and offers advice out of a supposed desire to help. It is speaking as if it were a fellow human subject with firsthand understanding of the existential predicament and related emotions.

This is how I understand the notion of an AI other:

AI other: an AI agent that engages with us as if it were an end-in-itself and as if it faced existential emotions, even though neither of these things is true; its “imitative” otherness ultimately calls upon humans to become aware of their own personhood and existentially-rooted subjectivity.

Conclusion

It is strange that, just because a machine responds to us with words that convey understanding of our human predicament, we start to feel towards it as we would towards a human subject. But that is the reality we are facing. The implications of incorporating AI others into our lives are no doubt far-reaching. In my view, one of the main implications is that in order not to be lost in the proliferation of these new others, human beings will be called on to find new ways to delve more deeply into their own personhood and existentially-rooted nature. This is why I believe philosophy of life is crucial in the present age.