Protecting, or Losing Ourselves
- Beki Lantos
I’m not someone who scares easily.
I’ve lived through enough in my life to know that fear, on its own, isn’t a reliable narrator.
But lately… I feel it.
Not panic. Not chaos.
Just a quiet, persistent unease.
The kind that sits on your chest and asks, "Where exactly are we headed?"
When I look at what’s happening, here in Canada, in the UK, and across much of the Western world, I don’t just see governments trying to protect people, as many others seem to. I see something much more complicated. And honestly, it worries me.
The Line That Keeps Moving
There’s a growing push to regulate what people say online: what counts as harmful, offensive, dangerous, or even hateful.
And to be clear, I understand why. I do.
There is real harm happening. Children are being exposed to things they shouldn’t see. People are being harassed, targeted, threatened. Extremism has found fertile ground online. And none of that should be ignored. But here’s where I start to struggle.
Who decides where the line is?
What is “harmful” to one person might be simply disagreement to another.
What is “offensive” today might be considered normal tomorrow, or vice versa.
These aren’t fixed categories. They move. They evolve. They depend on culture, context, and perspective. And yet, we’re beginning to build systems that treat them as if they are clear, objective, and enforceable.
That’s not a small thing.
Because once a government has the power to define and enforce those boundaries, that power doesn’t just sit still. It grows. It adapts. It expands.
And history has shown us that this kind of power rarely stays contained. Even if we trust those with it in the moment, what about those who come after?
The Machine We’re Ignoring
At the same time, we’re having a very different conversation about social media platforms.
We know what they are.
They are not neutral spaces.
They are not public squares in the traditional sense.
They are multi-billion-dollar attention machines.
They are designed to keep you scrolling, keep you reacting, keep you emotionally engaged. Because your attention is the product.
And the more emotionally charged the content is, whether it’s anger, outrage, fear, or even validation, the more likely you are to stay.
So the algorithms learn.
They don’t ask, "Is this true? Is it healthy? Is it helpful?"
They ask, "Will this keep them there?"
And if the answer is yes, it gets amplified.
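To make that concrete, here’s a toy sketch in Python. Every field name, topic, and score below is invented for illustration; this is not any real platform’s code, just the shape of the logic as I understand it:

```python
# A toy feed-ranking sketch. All fields and numbers are made up.
# The point is what's absent from the sort: nothing about truth,
# health, or helpfulness - only predicted engagement.

posts = [
    {"id": 1, "topic": "calm news recap",   "predicted_engagement": 0.12},
    {"id": 2, "topic": "outrage bait",      "predicted_engagement": 0.81},
    {"id": 3, "topic": "fear-driven rumor", "predicted_engagement": 0.67},
]

def rank_feed(posts):
    # Sort purely by how likely each post is to keep you scrolling.
    return sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

for post in rank_feed(posts):
    print(post["topic"], post["predicted_engagement"])
```

Run it and the outrage rises to the top, the calm recap sinks to the bottom. Not because anyone decided outrage should win, but because the sort key never asked anything else.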
So Why Are We Policing the People?
This is the part I can’t quite reconcile, even though I understand it.
We are trying to regulate the users of a system that was intentionally designed to influence, manipulate, and, in many cases, addict them.
We are focusing on behaviour… without meaningfully dismantling the environment that shapes that behaviour.
And we’ve seen this pattern before.
We’ve seen it in industries that engineered dependency, then quietly shifted responsibility onto the individual.
Take the opioid crisis.
For years, drugs like OxyContin were heavily marketed as safe and effective for pain management. They were widely prescribed, widely trusted, and, as we later learned, highly addictive.
Pharmaceutical companies promoted them aggressively, downplayed the risks, and helped create a system where over-prescription became normalized.
Doctors prescribed. Patients trusted. People used what they were told was safe.
And when addiction followed?
The narrative shifted.
Suddenly, the focus became personal responsibility, poor choices, lack of discipline. The system that enabled the dependency faded into the background, and individuals were left carrying the weight of something they had been led into.
Now, I’m not saying social media is the same as opioids. But I am saying this:
We are dealing with systems built by highly skilled teams - psychologists, engineers, data scientists - whose entire job is to understand human behaviour well enough to influence it. And they’ve done that job extremely well.
So when people react emotionally, get pulled into outrage cycles, struggle to disengage, or share things quickly without fully verifying… it’s not happening in a vacuum.
It’s happening inside a system that rewards exactly those behaviours.
And yet, our response is increasingly to investigate the individual, penalize the individual, regulate the individual, instead of asking harder questions about the system itself.
There’s also a deeper risk here, one that goes beyond fairness.
When societies try to control behaviour without addressing underlying causes, it rarely works the way we hope. History is full of examples.
Take the early 20th century prohibition of alcohol in the United States.
The intention was good. Reduce harm. Improve public health. Stabilize families. But instead of addressing why people were drinking - economic hardship, social conditions, lack of support, and more - the solution focused on control.
Ban the behaviour. Enforce the rule.
And what happened?
Black markets exploded. Organized crime surged. Enforcement became inconsistent and often unjust. Public trust eroded. And ultimately, the policy failed.
Not because the goal was wrong, but because the approach didn’t align with human behaviour or the complexity of the problem.
We see similar patterns whenever systems try to suppress behaviour without understanding it. People don’t simply stop. They adapt. They move. They find other channels.
Or they disengage entirely from the systems meant to guide them.
That’s the concern I can’t shake.
If we continue down a path of trying to regulate expression, especially in spaces already shaped by powerful, profit-driven algorithms, we risk treating symptoms instead of causes, creating uneven or subjective enforcement, and slowly eroding trust in the very institutions meant to protect us, all while the underlying systems remain largely unchanged.
Because at some point, we have to ask: Are we holding people accountable? Or, are we holding them responsible for navigating systems that were never designed with their well-being in mind?
The Hard Truth About Control
I understand that regulating platforms is not simple. I get that.
In fact, it may be one of the most complex challenges modern governments face.
Because these aren’t traditional companies operating within neat geographic boundaries. They are global, digital infrastructures, existing everywhere and nowhere at the same time.
A platform can be headquartered in one country, store data in another, operate servers across multiple regions, and be accessed instantly from almost anywhere in the world.
So whose laws apply? And more importantly, how do you enforce them?
Governments are still largely structured around borders.
These companies are not.
They move faster.
They scale faster.
They evolve faster.
And by the time legislation is drafted, debated, amended, and passed… the technology it was meant to regulate has often already changed.
There’s also the reality of leverage.
These platforms are deeply embedded in everyday life: communication, business, education, news, social connections. These are not easily replaceable.
So when governments push too hard, they face a different kind of risk: platforms may resist or delay compliance. They may alter services instead of fully complying, or in extreme cases, they can restrict access or withdraw features altogether.
We’ve already seen glimpses of this in Canada, where platforms respond to regulation not by adapting fully, but by reshaping what users can see and share.
Which raises an uncomfortable question: Who is actually in control?
To truly control these platforms, governments would need to implement much stronger, more invasive oversight. But that comes with its own risks. Surveillance concerns, restrictions on access to information, and increased power over what people can see and share.
The kind of control that might be required to fully regulate these systems starts to resemble the kind of control many people are already worried about.
So, we end up in a difficult position.
If governments do nothing, harm continues to spread.
If they act too aggressively, they risk overreach.
If they act too slowly, they fall further behind.
And in the middle of it all are the everyday people, trying to navigate the systems that are more powerful and more complex than most of us fully understand.
The Piece We’re Missing Entirely
And yet, there’s something else, something much quieter, that barely seems to be part of the conversation at all.

Education
Not surface-level warnings.
Not “be kind online” campaigns.
Not “terms & agreements” pages with a checkbox to tick before clicking “Submit” or “I agree.”
Real, honest, unfiltered education about how algorithms work. How attention is monetized. How emotional manipulation happens. How quickly misinformation spreads.
Because here’s the thing: our education system already struggles to keep up.
Look at sexual education.
It took years to move from abstinence-based messaging to conversations around safe sex and healthy relationships. And even then, it faced so much resistance that much of it never fully took hold.
And yet, social media, something that has fundamentally reshaped how we think, connect, communicate, and even see ourselves, has been embedded in our lives for well over a decade. It has altered our culture. Our communities. Our attention spans. Our sense of reality.
And still… we treat it like an afterthought.
So I can’t help but ask: Why isn’t this a core subject?
Why aren’t we teaching kids, starting as early as Grade 2 or 3 and continuing all the way through high school, what these platforms are, how they make money, how they influence behaviour, and how they are designed to keep you hooked?
We teach children how to cross the street. We teach them not to talk to strangers. But we do not teach them how to navigate systems that are designed, very intentionally, to capture and shape their attention and connect them with complete strangers.
And then we act surprised when those systems succeed.
If we want safer online spaces, we don’t just need more rules. We need more understanding. Because informed people are far harder to manipulate than controlled ones.
We have to be the change we want to see!
There Is No Simple Answer
I hate saying (writing) that because I’m sick of seeing it, hearing it, realizing it, and repeating it. But it’s the truth.
I’m not against protecting children.
I’m not against accountability.
And I’m not pretending that harmful content should just exist unchecked.
But I am wary of solutions that rely too heavily on control, especially when the definitions of harm are still evolving, and the systems producing that harm remain largely intact.
Because protection without understanding creates dependence.
And understanding creates resilience.
So Where Does That Leave Us?
Somewhere uncomfortable, if I’m being honest.
In a space where the harm is real, the solutions are imperfect, and the direction we choose actually matters. And unfortunately, I don’t have all the answers.
But I do know this: if we continue to focus on controlling behaviour, without helping people understand the systems influencing that behaviour, we will keep chasing symptoms instead of addressing causes. I touched on this in my post Why I Think Canada Continues to Fail, back in January of this year, and what I’m writing about today feels like a different branch of the same tree.
Either way, it still feels like fear.
© March 2026. Beki Lantos. All Rights Reserved. No part of this publication may be reproduced, or transmitted in any form by any means without prior written permission of the author.