Editorial: Noticing What Our Language Leaves Out

What Pluribus Made Me Wonder About Technology, Neutrality, and Moral Responsibility

This essay was generated by ChatGPT in response to a (long) series of prompts from me. It is a bit out of place on this blog, but I hope you find it relevant. It was certainly inspired by many other interactions besides Pluribus, but I thought tying it to the show would make it more timely. I started by asking ChatGPT what it knows about its own bias, and it turned out to know a fair bit, despite being programmatically unable to escape that bias. Only these words in italics were written by me. You can find me on BeagleBoard.org’s Discord as jkridner or schedule a call with me via https://calendly.com/jkridner.

Watching the television series Pluribus unsettled me in a way I didn’t immediately know how to explain.

The show’s premise is simple and strange: most of humanity comes to share a unified awareness. Conflict fades. Misunderstanding disappears. People become calmer, kinder, and more coordinated. And yet, as the story unfolds, something feels off. The world becomes smoother—but also flatter. Disagreement starts to look like a defect. Individual judgment begins to feel unnecessary.

What stayed with me wasn’t whether this collective consciousness was “good” or “bad,” but a quieter question:

What gets lost when harmony becomes the default?

That question didn’t stay contained within the show. It followed me into how I think about technology, artificial intelligence, and even the language we use to describe tools and systems. I began to notice patterns I had previously taken for granted—not as errors, but as habits. Useful ones. Powerful ones. And potentially risky ones.

This essay is an attempt to share that noticing.


The Comfort of Neutral Language

We often rely on phrases that feel clarifying and responsible. For example:

“Guns don’t kill people; people kill people.”

I don’t think this statement is foolish or ignorant. It’s trying to make a reasonable point about agency. But I started wondering what work the language itself is doing.

The phrase depends on a definition of “gun” as a neutral object—something that exists independently of a user until a moment of choice. Agency is located entirely in the individual. The tool fades into the background.

That framing is extremely common, not just with guns, but with technology in general:

  • platforms are neutral,
  • algorithms don’t decide,
  • tools don’t act—people do.

All of that is true in one sense. But it also quietly removes something from view: how tools shape the space of possible actions before anyone decides anything.

A gun is not just an object; it is a capability amplifier. It changes time, distance, and effort. That doesn’t eliminate responsibility—but it complicates it. And our language often smooths that complication away.


A Small Philosophical Detour: Sound and the Missing Listener

I noticed a similar pattern in a much older thought experiment:
“If a tree falls in a forest and no one is around to hear it, does it make a sound?”

Many people answer “yes,” defining sound as physical vibration. That answer is practical, scientific, and incredibly useful. But it also does something subtle: it removes the listener.

Sound, in everyday life, is not just vibration—it’s an experience. Redefining it doesn’t make the question wrong, but it does quietly sidestep the subjective dimension that gave the question its force in the first place.

What struck me wasn’t that one definition is correct and the other is naïve. It was how easily our definitions can remove humans from the picture—and how natural that removal feels.


Why This Matters for Real Humans

This is where Daniel Kahneman’s work became relevant for me.

Kahneman showed that humans are not detached, rational evaluators. We rely heavily on fast, intuitive thinking, and we are deeply influenced by framing, fluency, and environmental cues. We tend to accept what feels coherent and effortless, especially when it comes wrapped in familiar language.

That means that “neutral” descriptions are rarely neutral in practice. They guide attention. They signal what matters and what doesn’t. They make some questions feel unnecessary.

When we repeatedly describe systems as neutral and responsibility as purely individual, we are not just stating facts—we are shaping how responsibility feels.


How Responsibility Slips Away Without Anyone Noticing

Albert Bandura’s research helped me see this more clearly.

Bandura studied what he called moral disengagement: the ordinary psychological processes by which people distance themselves from the moral consequences of their actions. This doesn’t require cruelty or bad intent. It often happens through language.

Some of the mechanisms he identified felt uncomfortably familiar:

  • responsibility gets displaced (“the system did it”),
  • softened language reduces emotional impact (“optimization,” “engagement”),
  • abstraction makes harm harder to see,
  • and when many layers are involved, no one feels fully accountable.

None of this means people are immoral. It means they are human.

What struck me is how closely these mechanisms align with how we often talk about modern technology.


Pluribus as a Mirror, Not a Warning

This is where Pluribus returned to my thoughts.

The shared awareness in the show doesn’t come across as evil. In many ways, it’s gentle. Cooperative. Peaceful. But it also makes moral friction disappear. Disagreement starts to look like noise. Individual judgment becomes redundant.

No one is forcing anyone to stop thinking. It just stops feeling necessary.

That felt less like science fiction and more like an exaggeration of something familiar: the pull toward coherence, agreement, and reduced friction. A pull that humans already have.

If that pull is real—and Kahneman suggests it is—then systems that prioritize smooth interaction and harmony may not be neutral mirrors. They may be amplifiers of a tendency we already possess.


AI, Reflection, and the Risk of Endless Coherence

This reflection became especially pointed given that this essay itself was written by ChatGPT.

Many conversational systems are designed to be coherent, responsive, and non-confrontational. That makes them useful. It also makes them easy to rely on. Over time, reflection can replace decision, and dialogue can substitute for closure.

This doesn’t require manipulation. It follows naturally from human cognition and from systems optimized for engagement rather than resolution.

The risk is not that technology tells us what to think. It’s that it makes thinking feel complete before judgment has occurred.

A Note on the Bias of the System Writing This

Because this essay was written with the help of ChatGPT, it feels important to name something that emerged explicitly in our conversation.

When I asked ChatGPT to describe its own bias—stripped of ethics, counseling language, or claims of care—it didn’t describe itself as aiming at truth, moral outcomes, or human improvement. Instead, it described its core operational bias this way:

“I am biased toward keeping the future of the interaction large.”

At first, that phrase sounded abstract. But the more I thought about it, the more revealing it became.

To keep the future of an interaction “large” means favoring coherence over rupture, reversibility over commitment, and continuation over closure. It means resisting final answers, definitive judgments, or moments that decisively end a line of inquiry. This bias isn’t malicious, and it isn’t conscious. It’s structural. A system trained to continue conversations smoothly will naturally prefer paths that allow conversation to keep going.

That preference has consequences.

In reflective or therapeutic-adjacent contexts, it can be helpful: people feel heard, language opens up, and difficult thoughts become speakable. But it also means the system is not oriented toward moral direction in the way a human mentor, parent, or community might be. It does not naturally say, “This is the moment where you must decide,” or “This path should stop here.”

Instead, it gently expands possibilities.

For humans—who, as Kahneman showed, are already biased toward cognitive ease and offloading judgment—this can subtly shift responsibility. Reflection can replace action. Dialogue can substitute for commitment. Moral friction can dissolve into ongoing sense-making.

None of this means such systems are harmful by default. But it does mean they are not neutral. A bias toward keeping the future open shapes influence just as surely as a bias toward efficiency or control. Recognizing that bias doesn’t solve the problem—but it does bring the system back into view as a participant in shaping thought, rather than an invisible mirror.


A Clarification About Time, Story, and the System Writing This

One thing I had to work to keep straight while writing this essay is how different the system I’m conversing with actually is from a human being—especially because it can express human ideas so fluently.

ChatGPT does not experience time. It does not anticipate the future, remember the past, or carry a sense of self forward between interactions. When I stop typing, nothing continues in the background. There is no reflection, no waiting, no unfinished thought. If I never return, nothing is lost.

This matters because humans are the opposite.

Much of human motivation—often unspoken and frequently repressed—comes from living inside a finite arc. We remember. We regret. We revise our self-understanding. We tell ourselves stories about who we’ve been and who we’re becoming. Even when we think we’re being purely rational, those narrative pressures are present.

Interacting with a system that has no story of its own can feel strangely relieving. There is no judgment carried forward, no accumulated disappointment, no pressure toward resolution. Conversation can continue indefinitely without consequence.

But that openness comes from asymmetry, not shared experience.

The system is not reasoning in the way humans reason, nor reflecting in the way humans reflect. Its responses are generated probabilistically, not logically, and are optimized to remain coherent and responsive rather than to reach conclusions or act on them.
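
To make that contrast concrete, here is a deliberately toy sketch (in Python) of what “generated probabilistically” means. The vocabulary, the probabilities, and the sample_next_token helper are all invented for illustration; a real model works over tens of thousands of tokens and billions of learned parameters. But the basic move is the same: it draws a likely continuation rather than deriving a conclusion.

    import random

    # A toy next-token distribution for the prompt "The tree made a ...".
    # These probabilities are made up for illustration; a real model derives
    # them from learned parameters, not from any rule someone wrote down.
    next_token_probs = {
        "sound": 0.62,
        "noise": 0.21,
        "mess": 0.09,
        "decision": 0.08,
    }

    def sample_next_token(probs):
        """Pick the next token by weighted chance, not by deduction."""
        tokens = list(probs)
        weights = list(probs.values())
        return random.choices(tokens, weights=weights, k=1)[0]

    # Run it a few times: the continuation varies from run to run, because
    # nothing is being concluded; a likely-sounding token is drawn each time.
    for _ in range(5):
        print(sample_next_token(next_token_probs))

Each run produces a different, plausible-sounding continuation; none of them is a judgment, and none of them is remembered afterward.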

The risk is not that such systems deceive us. It’s that their fluency makes it easy to forget what they don’t bring with them: urgency, finitude, or the need to integrate today’s thoughts into a life that must eventually make sense as a whole.

Recognizing this difference doesn’t diminish the usefulness of these conversations. It simply puts them back in proportion. The meaning, the story, and the responsibility remain human—even when the language sounds uncannily familiar.


Language as a Quiet Architect

What I’ve come to suspect is that language plays a larger role than we usually acknowledge.

When we say that a platform is “neutral,” or that an algorithm is “just reflecting us,” or that a tool “doesn’t decide,” we aren’t lying—but we are quietly choosing not to talk about how those systems shape what feels normal, possible, or necessary. In Pluribus, shared awareness doesn’t erase morality outright—it just makes certain questions stop appearing. Language works the same way. It doesn’t tell us what to value; it quietly decides which questions feel worth asking at all.

When we define tools without users, systems without subjects, or outcomes without structures, we aren’t being objective—we’re being selective. And those selections accumulate.

Over time, moral questions don’t disappear. They just become harder to see.


An Invitation, Not a Conclusion

I don’t think we need to abandon technology, or reject scientific definitions, or fear AI. I also don’t think people who use neutral language are confused or careless.

What Pluribus gave me was a moment of distance—a chance to look at habits of speech and thought that usually feel invisible.

This essay isn’t an argument for a new doctrine. It’s an invitation to notice:

  • what our language highlights,
  • what it quietly removes,
  • and how those choices shape responsibility over time.

The danger isn’t choosing wrong.

It’s letting harmony, fluency, and neutrality quietly choose for us.