I knew right then things had gotten ridiculous.

It wasn’t when notorious trolls like Milo Yiannopoulos or wild conspiracists like Alex Jones were getting banned for riling up the radical left, or for questioning the government’s official accounts of mass shootings.

It wasn’t even when I learned about the sneaky tricks big tech platforms use when they decide to punish a piece of content – like demonetizing YouTube channels for using choice language, taking down a video that features “inappropriate comments”, or altering their recommendation algorithms to artificially make search queries less “relevant” and, therefore, less visible.

Rather, the moment of absolute clarity for me came when Facebook inexplicably banned a fairly innocuous article by my friend Seth Shugar about an obscure meditation technique taught in ancient Greece.

Read it here and draw your own conclusions. Basically, it highlights the beneficial effects of visualizing that you’re losing everything, and everyone, you care about. The exercise is meant to cultivate appreciation for the things you already have, rather than obsession over the things you don’t. It seems like a pretty wise, and hardly controversial, method of achieving more fulfilment. But not to Facebook’s ban-happy, entry-level content moderators, who apparently felt the article was “offensive or dangerous”.

Seth’s is just one of thousands of instances where publishers get their content banned, or get de-platformed altogether, for violating vague policies that flow from an increasingly narrow corporate agenda. Big tech censorship isn’t just a tool against a few high-profile rant-maniacs on the alt-right. It’s also been used to ban various pranks and challenges, to manipulate search results in the business world, and to prevent the spread of political content. How many articles like Seth’s are silently banned from the back offices of Silicon Valley’s tech giants? No one knows.

Big tech’s standard rebuttal – that they’re just regular private companies providing an optional service they should have full discretion to control – is simply absurd. They notoriously aren’t subject to the same liability standards content publishers must respect regarding defamation or copyright, a privilege they’ve fought tooth and nail to preserve. And they’ve grown to a level of market dominance that essentially forces any publisher (including professional newspapers and broadcasters) to use their platforms to reach a meaningful audience and be part of the public discourse.

The big-picture question is this: should three companies (Google, Facebook, and Twitter), whose head offices operate within a few miles of one another, and whose founders and staff overwhelmingly share the same politics, get to privately decide what free speech means for roughly 80% of the world’s social networking and search-related content activity?

A complicating factor when looking into this issue is that those who censor speech always act in the name of a great social ideal which usually holds currency with the population. In pre-modern societies, the ideal tends to be something like “protecting the established order” or “respecting tradition”. In contemporary democracies, it’s more likely to be some variant of “protecting vulnerable people”, or “promoting personal safety.” Under the cover of such ideals, which may legitimately justify some reasonable limitations to speech, those with the power and incentives to suppress speech can start operating with more and more discretion, if not impunity. The justifications vary, but the result is often the same. The open-ended search for truth, which implies a wide, uncomfortable diversity of views, is inevitably sacrificed, and the ruling ideals start getting more rigid, overbearing and controlling. We incrementally become an ideological society, rather than a truth-seeking one.

Likewise, big tech’s reasons for censoring speech drip with angelic-sounding social responsibility language. As Zuckerberg told a Senate committee in 2018, “we have a responsibility to not just build tools, but to make sure that they’re used for good” – apparently missing the Orwellian point that he had just appointed himself the one who defines “good” for everyone else. Or what of the secretly recorded video of the senior Google executive who admitted that Google was “training its algorithms” to prevent the next “Trump situation” ahead of the 2020 election? The point here is not political. It’s far deeper than that. Right-wing tech giants controlling the inner workings of a democracy’s operating system would be just as terrifying.

Today’s big-tech-dominated mediascape has morphed into a bizarre mix of woke capitalism and moral conceit, of business monopolies with stated ambitions to leverage that power to become ethical and political monopolies. This blend of private sector horsepower and social engineering with a halo is new. The MO increasingly seems to be “dominate markets … and save the world while you’re at it!” Big tech content censorship is our generation’s version of book burning.

Thankfully, there’s a really simple fix to all this, even though there’s no evidence governments or policymakers are considering it. I discuss it here, in a talk I recently gave to members of the Lord Reading Law Society. Instead of debating whether tech platforms are content publishers or technical intermediaries, we could simply regulate them as “online public fora” (which, I argue, is exactly what they already are). Governments could require the platforms to align their content terms and conditions with existing free speech laws, as some jurisdictions already require of big malls and other public fora. It wouldn’t require the adoption of any new substantive laws, and it would immediately benefit from decades of rich free speech case law that aligns with local values and preferences, at least in the Western world. Basically, anyone who believes a tech platform has interfered with their speech in a manner inconsistent with free speech laws could file a complaint; the government would render a decision on the issue, along with penalties or damages against the platform, which would then have a right to appeal that decision. This wouldn’t make free speech “absolute”. It would, however, make it instantly more democratic.

In the meantime, I invite anyone to please send me their big tech censorship stories, as I’m preparing a private brief to policymakers on this matter. I suspect there are way more of these stories out there than any of us realize.