if you think about the contemporary "woke", social-justice movements, and things like communism - even in its brutal Stalinist form - it's easy to forget that these are values of the Enlightenment taken to their extreme utopian conclusion.
but there is an inherent problem (which Alexei Yurchak calls "Lefort's Paradox"), and it is not very hard to spot: if the ultimate goal of Enlightenment values is the absolute liberation of society and the individual, then its implementation - which requires an authority that subjugates its population to the project - must create an authority that stands somehow above, or external to, its own ideals.
the liberation of society and the individual, the creation of "the New Man", was indeed the goal even of Stalinist communism. Except the implementation meant that society and the individual came under absolute authoritarian control.
this is similar to contemporary liberal movements; the goal is liberation, but the implementation means controlling speech, controlling language, censoring political viewpoints, "canceling" and destroying anyone who disagrees with them, passing vaguely defined hate-speech laws, forcefully injecting the ideology into every part of culture, etc etc
the parallels are quite conspicuous
The problem with this parallel, to me, is one of extremes, similar to what I was trying to say to Cog. Stalinist Russia was f%$king brutal; up to 9 million people died from starvation alone. In comparison, what we're talking about here is debate around whether social media posts are being censored under the guise of hate-speech laws. When people make comparisons like this, a part of my brain stops taking it seriously, because these things aren't in the same ballpark, and it feels like over-exaggeration.
While I don't think it's as clear-cut as Froyd says, it's a lot closer to some sort of middle ground than say, "progressivism is an economically illiterate deathcult who believe in original sinlessness suppressed only by capitalism".
Societies don't just flip to fascism overnight; there's a granular adjustment year over year until the original intention is no longer recognisable. Hitler didn't start with death camps, he started small. People are right to be concerned when governments exercise unilateral control over speech, even if it's well-intentioned at first. While we don't experience it as particularly menacing, legislation becomes precedent, and precedent informs future laws which may be written by people with nefarious intent.
My contention with this line of thinking is that the information environment has changed and demands regulation, even if it's also correct to be concerned about that regulation. People can literally purchase and manipulate information environments, which gives them the potential to essentially dictate (hyper)reality for populations. This isn't speculative or a slippery slope; it's happening right now.
Silencing the people can mean holding their mouths shut, but it can also mean shouting down the real signal with a false one, and this is trivially easy to do through algorithm manipulation and bot farms. The result is the same: disenfranchised or manipulated people beholden to the interests of the powerful.
I think we should absolutely be terrified of governments creating vague legislation allowing them to control speech. But leaving speech entirely unregulated in the age of bot farms, deepfakes, and algorithmic manipulation is even more certain to lead to dystopia. We need good, clear, bipartisan regulation, not to censor the hard R or to preserve pronouns, but to stop the literal end of truth and history. There needs to be some mechanism in place to stop billionaires and governments from gaining absolute power over the narrative. The free market won't work for this one.