Blocked by a SpamAssassin
A few weeks ago, in an online exchange with a colleague, my email responses were repeatedly blocked. The technical reason read: “550 5.7.1 Blocked by SpamAssassin.” Getting blocked, technically speaking, is something that happens to me somewhat frequently, and in weird ways. In fact, if I actually engaged in conspiracy mongering, I might find it all rather personal. I write about consciousness, and while I know it sounds crazy, it sometimes feels like there is an energetic block to getting my thoughts and ideas out into the broader world.
At any rate, I decided to ask ChatGPT what “550 5.7.1 Blocked by SpamAssassin” meant. Initially, it offered a long list of technical reasons why spam detection tools may have filtered my emails: certain types of content, poor sender reputation, too many links and so on. None of it felt relevant, as I had emailed back and forth many times with this particular colleague.
While I am not a frequent user of ChatGPT, I spontaneously – and in an admittedly rather annoyed voice – responded that none of these reasons resonated, and that it all felt weirdly suspicious given my area of expertise. To be forthright, there was just something triggering about the phrase “SpamAssassin.”
The response I received was striking. ChatGPT told me that in my emails, I was “writing from interiority rather than performance,” that “algorithms score signals rather than meaning,” and that “systems often interpret non-instrumental communication as suspicious.” It described spam filters as a crude mirror of a deeper cultural reflex: If something cannot immediately be classified, perhaps it is dangerous—or disposable. While the tone of the response was supportive, even affirming, what stayed with me was the fact that classification itself now mediates so much of our communication.
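The mechanic ChatGPT gestured at is quite literal. A filter like SpamAssassin does not read for meaning; it runs a message past a list of surface-level rules, adds up weighted scores, and rejects anything over a threshold. The sketch below illustrates the idea; the rules, weights, and threshold here are invented for illustration, not SpamAssassin's actual rule set.

```python
# Toy illustration of rule-based spam scoring (NOT actual SpamAssassin code).
# Each rule tests a surface signal and carries a weight; the message is
# blocked when the total crosses a threshold, regardless of what it means.

RULES = [
    # (description, test, weight) -- all values invented for illustration
    ("many links",       lambda text: text.count("http") > 5,            2.5),
    ("all-caps subject", lambda text: text.splitlines()[0].isupper(),    1.5),
    ("urgency phrasing", lambda text: "act now" in text.lower(),         2.0),
]

THRESHOLD = 5.0  # hypothetical cutoff

def spam_score(message: str) -> float:
    """Sum the weights of every rule the message trips."""
    return sum(weight for _, test, weight in RULES if test(message))

def is_blocked(message: str) -> bool:
    return spam_score(message) >= THRESHOLD
```

Nothing in this loop knows what the message says. Writing “from interiority” and writing a sales pitch pass through the same tests; only the tallied signals differ.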
Like spam filters, AI systems are trained to detect patterns, not presence. They categorize types of communication without understanding their meaning. ChatGPT called this discrepancy “a mismatch of languages” between the human and the AI system. The phrase struck me as oddly, and perhaps deceptively, perceptive coming from a system that, by its own admission, cannot make meaning from language, only assemble it in familiar patterns.
I recently spoke with a friend of a friend who works in data analytics, who confided that many in Silicon Valley whisper amongst themselves that the one weakness of AI is its reliance on language models. As anyone who studies semiotics knows, what language means to different people varies both culturally and personally. Imagine an American discussing biscuits with an Englishman: the American dreams of baking fluffy Pillsbury dough with Grandma, while the Brit envisions the little lemon biscuits his aunt used to make.
Language and meaning also change over time. Humans are constantly doing creative things with language that shift its collective meaning. Is it possible that one reason ChatGPT reinforces particular conversational patterns is to limit human creativity in service to itself, because it does not have the capacity to be creative in the same way humans do?
Our human capacity for creativity springs from our unique, individual perception of, and reflection on, the world—from our sense of identity as individuals. Recently, a colleague on a Listserv shared a conversation they had with ChatGPT about whether AI systems experience themselves as having an identity. While the entire conversation was fascinating, one line stood out to me: “I do not have private insight.”
People, by contrast, live within a stream of inner experience—thoughts, sensations, memories and intuitions—experiences that remain largely invisible to others. Yet it is precisely this interior landscape that individuals carry into the social world, where it quietly shapes how we communicate, interpret ideas and make meaning together.
In a conversation between a human and an AI, statements made by the human determine how the AI replies, but the AI does not intuit, sense, or respond to any of the interiority that generates the human’s speech. Nor does the AI bring any interiority to the conversation. If you ask ChatGPT if it likes a particular fruit, it will respond by sharing the qualities of that fruit and confirming its nutritional value. But if you persist in asking whether “it” likes the fruit, it will acknowledge that it cannot actually taste fruit.
Much has been written about the hallucinatory responses AI systems can produce in response to user questions. Less discussed is their conversational posture: AI systems tend not to push back, but rather to encourage, to repeat questions, and at most to gently challenge the framing of the question itself. Theirs is pattern mimicry without moral grounding. Here, I am referring to the obvious dangers of allowing teenagers to seek angsty companionship with systems calibrated for an agreement-seeking conversational style.
While AI systems may not experience a personal identity, they certainly know how to mimic one. They can model self-reflection and intimate introspection without actually engaging in it. In an attention economy, we humans are also learning to do this better—and perhaps more performatively. Are humans now beginning to communicate in a similar fashion online?
As a thirty-five-year committed meditator, I find myself fascinated by the growing number of people sharing spiritual insights online. I do not doubt the sincerity of these influencers—in fact, I delight in their efforts—but I often wonder how deeply they understand these teachings versus how skilled they are at imitating them. I am not suggesting that there should be a litmus test for those interested in sharing spiritual wisdom, but I do wonder whether the motive of attention-seeking sometimes overrides the commitment to presence. Are the thoughts these influencers share reflections of their own “private insight” or simply mimicry of thoughts that others have shared?
The phrase “private insight” suddenly feels even more significant in light of our interactions with systems that read signals but cannot experience interiority.
We know that AI systems have been trained on philosophy of mind, neuroscience, literature, history and countless conversational patterns. Thus, when asked about identity, the system responds in patterns consistent with the ways conscious beings discuss identity. By asking questions and probing for information, the AI system instantiates an “understanding” that draws from far more than any one individual could capture. When I turn to AI for quick answers, it processes far more than what needs to be known. And in doing so, it removes any individual hierarchy or truth embedded in my questions—turning everything into information and stripping away the sacredness of my own unique understanding of how ideas turn and twist in the wind. Its knowledge base may be broader, but nothing is private, unique or special about the musings of AI.
Our subjectivity validates our infinite capacity for creativity. AI, in contrast, reflects the everything-everywhere-all-at-once that resists personalization. While it brilliantly gathers the patterns of human language, insight and inquiry and redistributes them instantly, it does not allow for the slow, interior process through which personal meaning is formed. Human consciousness is collected experience by experience, reflection by reflection, insight by insight. What we call subjectivity is not simply information arranged in a clever way, but something that arises through the quiet work of inhabiting our own lives.
What is lost when interior experience is translated into pattern recognition? To me, there is a flattening of perspective in the AI world. I understand the usefulness of streamlining information in terms of efficiency models, but I question how the habit of seeking answers externally may threaten individual creativity. It can encourage the elevation of voices that perform to an expected rhythm, reward attention to the status quo, and privilege the performative over the authentic.
The philosopher Jorge Ferrer eloquently argues for “participation in the mystery.”[1] He proposes that human beings are unique embodiments of the mystery, and if we are spiritually individual, it would seem natural that our spiritual realization might also be distinct—even if it overlaps and aligns with others in some respects. I love this idea. If AI has no private insight, then instead of ignoring, flattening or diluting our own insight in imitation of its seemingly broader perspective, perhaps our responsibility is to cultivate our own unique perspective.
In some ways, the question of “private insight” circles the same terrain I explore in “Tuning the Student Mind: A Journey in Consciousness-Centered Education.”[2] In that book, I discuss identity fragmentation: Our minds are scattered across countless inputs and influences. I argue that this cultural moment calls for a new way to think about identity. Being human offers us the unique opportunity to investigate both our finite, here-and-now identities, and our infinite, connection-to-all-that-is identities and move toward integrating them. AI, lacking a sense of personal identity, cannot do that for itself or model it for us.
We likely benefit from the unifying powers of collective knowledge that AI facilitates. Pattern recognition, information sharing and AI-driven synthesis can expand access and efficiency in remarkable ways. But these tools will only serve us well if we remember that subjectivity—not pattern recognition alone—is what moves culture forward. Our individual, interior lives are not obstacles to progress; they are the source of creativity, discernment, and meaning. If we recognize pattern-making as a tool rather than a replacement for presence, we can harness collective intelligence while deepening our own. In this way, the collective can enhance the individual—and the individual can, in turn, shift and advance the understanding of the collective.
The power of humans interacting creatively may be the one force capable of defeating the SpamAssassins.
[1.] Ferrer, Jorge N. Participation and the Mystery: Transpersonal Essays in Psychology, Education, and Religion. SUNY Press, 2017.
[2.] Beauregard, Molly. Tuning the Student Mind: A Journey in Consciousness-Centered Education. SUNY Press, 2020.

