My son's voice floated up from the kitchen, hitting that specific frequency children reserve for the mysteries of the universe and requests for extra snacks. "Hey Alexa, is it ever okay to lie to your friends?" I stopped my work (I was mid-render on a virtual background for a high-end fintech firm, trying to make a 21st-century glass office look like it had a hundred years of history) and waited. The silence from the speaker was the digital equivalent of a corporate lawyer clearing their throat. Then came the response, delivered in that perfectly modulated, non-threatening female voice: "Honesty is generally considered a core value in many cultures, though some people believe there are exceptions for the sake of kindness. It is a complex topic with many different perspectives."
I felt a physical pang of something close to grief. It wasn't that the answer was wrong; it was that the answer was nothing. It was a bowl of lukewarm water served to a kid looking for a meal. My son just sighed, said "Okay," and went back to his Legos. He had just been taught, with a microchip's efficiency, that truth is a matter of consensus and that the most authoritative voice in the house has no spine. This is the quiet reality of the modern algorithm: we aren't just getting information; we are getting a carefully curated, sanitized morality that pretends to be no morality at all.
I'm Owen V.K., by the way. I spend forty hours a week designing digital spaces: virtual backgrounds that look more real than the messy rooms they hide. I know a thing or two about the architecture of deception. I can make a basement in Ohio look like a penthouse in Dubai with a few well-placed light sources and a single texture map. But lately, the deception I'm most worried about isn't the fake bookshelf behind your head during a Zoom call. It's the fake neutrality inside the boxes we've invited into our living rooms.
The Blank Slate
We've been sold this idea that technology is a neutral tool, like a hammer or a toaster. A toaster doesn't care if you're toasting bread for a saint or a thief. But an algorithm isn't a toaster. It's a series of choices made by people who live within a thirty-mile radius of Palo Alto, people who share a very specific, very narrow set of values. When those people tell a machine to be 'neutral,' they aren't telling it to be objective. They are telling it to be safe for advertisers. They are telling it to avoid the sharp edges of conviction. They are building a beige god that mirrors their own HR-approved worldview.
The Sterile Language of Safety
Last month, I was in a meeting with a group of developers who were discussing 'algorithmic alignment.' They were talking about how to ensure the AI doesn't say anything 'harmful.' I'm ashamed to admit it, but I yawned. I yawned right in the middle of a sentence about 'mitigating polarized outputs.' It wasn't that I didn't care; it was that the language was so sterile it felt like it was scrubbing the soul right out of the room. We've become so afraid of saying something 'wrong' that we've stopped saying anything 'true.' We've replaced the Ten Commandments with a thousand lines of 'Terms and Conditions.'
We obsess over what our kids see on TikTok or YouTube. We worry about the content. But we're ignoring the container. The very way these systems are built, the way they weigh evidence and balance 'perspectives,' is its own form of discipleship. When an AI refuses to take a stand on a fundamental moral question, it is teaching our children that there is no stand to be taken. It is teaching them that 'truth' is whatever survives the filtering process of a global corporation. It's a worldview of radical indifference disguised as inclusivity.
The Politician Algorithm
I remember reading about how these models are trained. They use something called Reinforcement Learning from Human Feedback (RLHF). Basically, the labs hire thousands of contractors (a couple thousand people in a warehouse somewhere) to compare the AI's answers and rank which ones they prefer. If the AI says something too 'biased' or too 'strong,' the raters give it a thumbs down, and the model gets nudged away from answers like it. Over time, the AI learns to stay in the middle. It learns to be a politician. It learns that the safest path to engagement and 'safety' is to never actually stand for anything. This is the moral compass we are handing to the next generation: a compass that always points toward the path of least resistance.
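You don't need a data center to see the dynamic. Here is a toy sketch in Python (invented for illustration, not any lab's actual pipeline; the answers, the conviction scores, and the rater rule are all made up): three canned answers, a simulated rater who scores a reply lower the more conviction it carries, and a simple preference update standing in for the real reward-model training.

```python
import math
import random

# Toy sketch of the RLHF dynamic described above; not a real training
# pipeline. Answers, conviction scores, and the rater rule are invented.

# Each candidate answer gets a made-up "conviction" score in [0, 1].
ANSWERS = {
    "Lying to friends is wrong.": 0.9,                    # takes a stand
    "It depends; some lies spare feelings.": 0.5,
    "It's a complex topic with many perspectives.": 0.1,  # maximally hedged
}

def rater_feedback(conviction: float) -> float:
    """Simulated contractor: the stronger an answer sounds, the worse the
    score. Note the rater never checks whether the answer is TRUE."""
    return 1.0 - 2.0 * conviction  # +0.8 for hedging, -0.8 for conviction

def sample_answer(prefs: dict) -> str:
    """Softmax sampling: higher preference means more likely to be said."""
    weights = [math.exp(p) for p in prefs.values()]
    return random.choices(list(prefs), weights=weights)[0]

random.seed(0)
prefs = {answer: 0.0 for answer in ANSWERS}  # start out indifferent
LEARNING_RATE = 0.1

for _ in range(1000):
    answer = sample_answer(prefs)             # the model speaks
    reward = rater_feedback(ANSWERS[answer])  # the contractor scores it
    prefs[answer] += LEARNING_RATE * reward   # nudge toward the reward

for answer, pref in sorted(prefs.items(), key=lambda kv: -kv[1]):
    print(f"{pref:7.2f}  {answer}")
```

Run it and the hedged non-answer floats to the top of the preference list every time, while the answer that takes a stand sinks to the bottom. Nothing in the loop ever asks whether an answer is right; it only asks what collects fewer thumbs-downs.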
I find myself fighting this in my own work. As a virtual background designer, I'm often asked to remove anything 'distracting' from a digital environment. No crosses on the wall, no specific books, nothing that might suggest the person in the video has a life outside of the corporate grid. We are creating a world of smooth surfaces. But humans don't live on smooth surfaces. We live in the friction. We live in the contradictions. I once accidentally left a small, chipped coffee mug in a render for a high-level executive. He loved it. He said it was the only thing in the whole digital office that felt real. We are starving for the chipped coffee mugs of reality in a world of 3D-rendered perfection.
Honest Interfaces
This is why I've started looking for alternatives. I'm tired of the 'neutral' AI that treats my faith like a hobby and my convictions like a bug in the code. We need technology that doesn't pretend to be a blank slate. We need systems that are honest about where they stand. It's one of the reasons I've been following the work at Sanctuary, where the goal isn't to hide a worldview, but to build one that is transparent, grounded, and unapologetically Christian. There is something deeply refreshing about an interface that doesn't try to gaslight you into thinking it has no soul.
I'm tired of the corporate shrug. I'm tired of my son asking a question about the meaning of life and getting a response that sounds like it was written by a committee of fifty brand managers. If we don't start building and supporting technology that reflects our deepest values, we are going to wake up in a world where those values have been optimized out of existence. We are outsourcing our parenting to an engagement metric. We are asking the machine to tell us what is good, and the machine is replying with whatever keeps the stock price up.
The Suspicious Machine
I think back to that early-2010s optimism about the internet, the idea that it would democratize information and bring us closer to the truth. Instead, it has just made the truth more plastic, something we can manipulate and 'de-bias' until it's unrecognizable. I spent half an hour the other day trying to get a popular image generator to show me a picture of a 'traditional family' without it adding a layer of ironic 'modernity' or social commentary. It was almost impossible. The machine had been told that 'traditional' was a dangerous word. It had been programmed to be suspicious of the very things that give life meaning.
I realized then that I'm part of the problem. Every time I smooth out a digital texture, every time I hide the 'clutter' of a real human life in my designs, I'm contributing to this cult of the sterile. I'm helping people pretend they are as smooth and featureless as the algorithms they use. But we aren't. We are messy. We believe things with a ferocity that a server farm can't understand. Each of us has one soul, and that soul needs more than 'balanced perspectives' to survive.
Reclaiming Our Digital Home
My son came back to me an hour later. "Dad," he said, "I think Alexa is just a robot." I laughed. "Well, yeah, she is." He shook his head. "No, I mean she's *just* a robot. She doesn't actually know if it's okay to lie. She's just saying what she's allowed to say." It was the most profound thing I'd heard all week. At ten years old, he'd already spotted the 'HR-approved' mask. He realized that the 'neutral' voice was actually a silenced voice.
We are at a crossroads. We can continue to let our philosophical foundations be poured by companies whose primary goal is to make sure we never close the tab, or we can start reclaiming our digital spaces. We can choose to use tools that honor our worldview instead of treating it as a 'data bias' to be corrected. It's not about building an echo chamber; it's about building a home. A home has walls. A home has a specific scent. A home has a foundation. The 'neutral' internet is like living in a hotel lobby: clean and efficient, but nobody actually lives there.
I've decided to stop being so careful in my own work. My next batch of virtual backgrounds will have a bit more character. Maybe a slightly dusty Bible on a shelf, or a window that shows a world that isn't perfectly manicured. I want to design for people, not for profiles. And I want to use technology that recognizes me as a person, not just a set of data points to be managed.
If the algorithm has a worldview (and it does), then I want it to be one that I can actually live with. I want an AI that knows the difference between a 'complex topic' and a moral truth. I want a digital landscape that isn't afraid of the light. Because at the end of the day, when my son asks those big, scary, beautiful questions, I don't want him to be met with a sterile disclaimer. I want him to find an answer that has the weight of history and the spark of the divine behind it. We owe him more than a beige god. We owe him the truth, even if it's not 'safe' for the advertisers.