Why does this matter? Because free speech, free expression in general, is a means to express values and beliefs. Our values and beliefs are integral to the way we each evaluate the content decisions of private parties like social platforms, publishers, and people.
No one values spam, so no one writes hand-wringing essays about whether it is ok to kick spam and spammers off social media. Most people recognize that porn, even if they like it, is not something that should be in everyone’s faces all the time, so there is little existential concern for free speech as a whole when porn is excluded or hidden from some contexts.
You cannot expect to make sense of free speech if you try to ignore other values.
It doesn't stop with porn: if you're on a social platform to connect with friends and all you see is political content you don't care about, that's noise too.
If a social media platform doesn't let you suppress things you don't want to see, that's a problem with the platform.
If a social media platform doesn't let you suppress speech you don't want others to hear, the problem is with you.
If you're trying to muddy the waters of free speech with false narratives and strawman arguments like spam and porn, that's just cheap manipulation.
This has nothing to do with values, if someone doesn't want to see porn, he should be able to avoid it, and if someone wants to see porn he should be able to see it. Spam is by definition something nobody wants to see.
Only collectivists and authoritarians would start going off on "society" values as if their values are superior enough to warrant forcing.
And that's really what moderation is ultimately about. On every forum and platform: suppressing noise and increasing the signal-to-noise ratio.
> If you're trying to muddy the waters of free speech with false narratives and strawman arguments like spam and porn, that's just cheap manipulation.
How is that trying to muddy the waters? It's absolutely relevant. They're both great examples of speech that's completely legal and yet universally banned.
> Spam is by definition something nobody wants to see.
But then what's spam to one person might not be spam to another. There are always some people who do respond to spam.
Same thing with hate speech; most people don't want to see it, but some people disagree on what it is. And let's face it, hate speech is the thing that some people try to defend under the banner of free speech. And unlike spam, hate speech can pose a real danger. It can inspire people to commit violence against the group being hated (this has happened plenty of times). If left unchecked, it can create a culture in which violence against the hated group becomes acceptable.
All meaningful speech has consequences. Calls to violence definitely so. Is this really something that needs to be defended when spam and porn don't?
I'm not saying it definitely does or doesn't, but if you want to have a meaningful discussion about free speech, you have to be willing to go into specifics, and not just wave meaningless platitudes around.
So apparently here lie the gory details. There was a short interview article with one of the fired Twitter engineers, who claimed to have been "basically responsible for messing up your timeline". As a follow-up to the article, someone on Twitter who is believed to be that person defended the end state that GP considers worse (as does everyone!), saying it showed better retention in A/B testing.
They must have been doing what they thought was good, backed by scientific methods and hard data, SNR ever improving in pandas. Yet everyone is in broad agreement that Twitter betrayed and sabotaged its user community, engaged in artificial manipulation of honest, good people, and accelerated division, radicalization, and even the promotion of terrorism, worldwide.
I think we have elephants in the room, the definition of SNR being one of them, or at least one of its legs. Something is off.
I think it's more of an acknowledgement that there's a price to freedom/liberty and there are fundamental trade-offs involved. I.e. can't have our cake and eat it too.
The difficulty is in finding where exactly to draw the line. For better or worse, there are currently many technically legal yet unsavory views allowed under "freedom of speech".
You've spelled out pretty well the dangers of free speech, but if we err too much on the side of caution, that itself causes animosity and discord from (perceived or actual) loss of liberties. And of course, if left unchecked, it can create a culture which embraces more centralized concentration of power, inviting corruption and further concentration.
I'm still wrestling with my own thoughts on where to draw the line and how to balance freedom with safety.
All I'm saying is: we should be honest about these trade-offs, and not pretend there's such a thing as "absolute free speech". I also think we'd better err on the side of allowing too much speech than too little. But at the same time, I think the events of the past couple of years warrant a reevaluation of those trade-offs and where we draw those lines. And I don't think Twitter, Musk, or any other social media company should be making those decisions for us, though they probably should be a voice in that discussion.
From a libertarian perspective private platforms should freely decide what they allow and what they won’t. The public can also freely criticise them and move away. Advertisers can freely decide if they want to advertise there.
Every libertarian should visit a true neo-nazi site like Stormfront to see how much we are allowed to say.
Porn is not "completely legal". There are many limits to porn within the law.
Inciting violence isn't protected speech. Hate speech isn't protected speech.
The first amendment states explicitly that the government cannot create laws to restrict the human right to free speech and association. That does not mean every expression is legal. Comparing legal things and illegal things is muddying the waters.
To me it's all pretty simple. The platform has its loyalty to the listener. It is up to the listener to decide whether he wants to engage with different kinds of speech, and most networks already can determine that, either by separation into forums and communities or just by general engagement metrics.
If you prevent exposure because the listener doesn't want to be exposed, that's OK.
If you prevent exposure because you don't want the listener to be exposed, that's censorship.
Censorship is suppressing information because it might be listened to, while spam filters and porn filters suppress information because it won't be listened to. The ethical and moral boundary is clear, and it is strictly the platform's loyalty to its consumers.
Anything else is inherently positioning the platform in an assumed moral superiority to its users, which given the recent Twitter revelations, is an incredibly bad assumption.
Moderation can be both good and bad. Good moderators are those that remove content nobody will want to see. Bad moderators remove content because people might see it.
There are plenty of calls to violence from the Ukraine side against Russia. None of those got censored. Should they?
I'm not defending the essence of the speech. It should be irrelevant. I'm defending what is a moral intent versus an immoral intent of the platform. There's meaningful discussion, and then there's constructing strawmen, fabricating threats, and using vague, ambiguous terms like "hate speech".
If your method for deciding whether to censor foofoo is whether there exists a story where foofoo leads to bad outcome, and someone wants foofoo censored, they will create that story (potentially including real life actions). Therefore deciding whether to censor foofoo should be independent of the existence of those stories. They also tend to overgeneralize and use broad categories when the stories themselves are specific anecdotes.
To me the boundaries are pretty clear, and everything else is just people telling pretty meaningless stories, conflating terms, and having inconsistent standards.
Understanding where things stand morally is something you do after discussing them, not before.
Individuals can have values. I have disagreements with the parent comment, but free speech is certainly something that people value.
Disagreement tends to come in regarding what we each value, and the hope is that free speech is a shared one.
Everyone wants to censor what others see. Why is that?
I don't have a link now, but IIRC in the last US election Gmail's spam filter blocked fundraising or "informational" emails from the major political parties, and it generated a big controversy.
> “My parents, who have a Gmail account, aren’t getting my campaign emails,” Representative Greg Steube of Florida told Google CEO Sundar Pichai in July 2020, during a congressional hearing that was ostensibly about antitrust law. “My question is, why is this only happening to Republicans?”
My Gmail spam folder is very clear evidence it happens plenty to Democrats.
> In response, Google has launched a controversial program allowing campaign committees to effectively opt out of spam filters — a huge concession to mounting political pressure from Republicans. But Verge reporting shows the RNC has not taken advantage of the program and made few efforts to alter the core practices that might result in their emails being labeled as spam.
> A source familiar with the matter confirmed to The Verge that, nearly a month after the pilot’s launch, the RNC has not joined or even applied for the program, even as the party continues to mount political and possibly legal pressure against Google. The RNC did not respond to multiple requests for comment regarding the committee’s decision to abstain from the pilot program.
And who judges the relevancy?
I can tell you that this new account got absolutely flooded with porn suggestions, all Vietnamese language, a lot of it looking like borderline underage stuff.
I'm not anti-porn in any way but this was pretty awful. My business is a bakery, I did nothing to invite these suggestions except sign up. I did start to follow bakery and coffee related accounts and the suggestions got better after a few days but not entirely. Then for reasons about a month ago I decided that our business shouldn't use Twitter anyway.
Incidentally when I log into my personal Facebook account here, since I have personalized ads turned off I get shown local Vietnamese language ads in the sidebar and for several months these have just been straight up hardcore porn. Loads of penis enlargement ads featuring close ups of actual penises, lots of "hot women in your area" showing way more than I want to see when I just logged in to check how my family back home in Europe are doing.
Just because you don't see unwanted stuff on social media, don't assume other people have the same experience. Facebook are notorious for basically doing zero content moderation in Asian countries (see Myanmar), and we all know what's going on with Twitter. I'm pretty sure that within a pretty short timeframe it's no longer going to be borderline underage stuff in non English speaking parts of the world.
None of us know what kind of content moderation went on there. There is no public index of what gets removed, and according to Brandon Silverman, even Facebook itself does not review it:
> "violating content... in a lot of cases what happens is it gets removed, it gets taken down from the platform, and more often than not, essentially deleted, just disappears forever. A lot of that violating content is really important to the public interest, and it would be enormously valuable if we were able to create spaces for that content and the actors involved and the networks they create and build to be studied by an outside community, and an independent research ecosystem over time."
It's possible that whoever was removing content there was close to the military. We don't know. Other Asian countries could have similar influence. Isn't it accepted that Mark wants to bring Facebook to China? Maybe they wanted to see Facebook demonstrate its content controls before allowing them in. ¯\_(ツ)_/¯
 16:45 in https://podcasts.apple.com/us/podcast/the-lawfare-podcast/id...
Anyway, the point is that moderation of porn is widely socially accepted because it aligns with values that are widely held. And if you want to understand why people want hate speech moderated (for example), you also need to look at it through the lens of values and beliefs that people hold.
For example, the article points out that Apple could decide at any point to remove the Twitter app from their store for any reason. Such a reason could be that there is lots of porn on Twitter, and this could encourage Twitter to discourage porn on their platform, kind of like what happened with Tumblr.
As a consumer, I don't mind it if platforms censor spam and porn as long as there is a switch somewhere I can toggle that will let me opt out of it if I want to see spam and porn. That's just my personal preference and if enough other people express that preference then the culture around free speech will change.
In this sentence, you've gone farther than the article (and many similar posts) in defining how you'd trade off between free speech and other values. Extrapolating a bit, it sounds like:
(1) You believe "platforms" have responsibilities in free speech culture to make all submitted content available.
(2) It is ok for platforms to control default visibility of content based on what they perceive as pervasive values, as long as those controls can be overridden by users.
(3) You trust social and market competition among platforms to make sure there are platforms aligned with enough expressions of values that everyone gets speech.
I'd love to see a deeper dive on some of these points by cultural-not-legal advocates. Some questions I'd like to see vigorous discussion of:
- When does something become a platform, and start having responsibility to rebroadcast all submissions?
- How much friction is ok for a platform to introduce before it blurs the line with censorship? (Extra submission hurdles? Demonetization? Deamplification? Opt-ins vs opt-outs?)
- Are there categories of values that are ok to introduce friction around (e.g. porn) vs others that are not (e.g. politics)? How can we separate them reliably?
- What are the qualities of competition between platforms that need to be maintained to make sure the allowable friction reflects a range of cultural values?
As a speech-not-reach guy, my conclusion is that platforms are participants and inevitably express their own values through curation, so it's most important to keep competition alive at the platform level. However, I think there could be a better steel-man case for platforms having coherent responsibilities than I've seen. A lot of the discussions start strong and then devolve into breathless quotes about freedom.
As a small stakeholder, I naturally would prefer a world in which the other stakeholders share my values, because that would make the companies more willing to do what I want. Right now that means I would like the culture of free speech to change in my favor.
If I lived in a place where regular people liked free speech but the government liked censorship, then I would want the laws on free speech to change, and I'd believe the culture to be fine.
- I think a platform doesn't have a specific responsibility to rebroadcast everything. If I don't like what things they choose to rebroadcast I'll find them less useful and start using a different service.
- For content that I don't want to see, they should introduce any hurdle they want. I am only annoyed with censorship when they get in between the sender and the receiver without asking the receiver first. For example, censoring spam and porn is fine when done at the request of the user who would receive the spam and porn. Censoring misinformation is less fine because it has to be done without permission of the receiver. The receiver may be gullible and stupid, and then it looks like censoring misinformation is good. But sometimes the receiver is smart and better informed than the censors, and it's not easy to tell in advance.
- Same as the previous: the categories for which it is OK to introduce friction are those that the user who would receive the messages asks you to censor. For example, when an ad is irrelevant there is often a button you can press to tell the platform that you don't want to see those kinds of ads, and then it starts showing different ads. I would like something similar for spam, porn, and misinformation.
- I don't know about competition, network effects seem very strong. Instead of having a special network only for special people who like free speech, I would prefer to change the wider culture so that the mainstream social networks support free speech. It's either that or wait for some crazy billionaire who happens to value free speech to buy the mainstream platform? Seems unreliable.
Citizens arguing about ugly topics like identity politics, conspiracies, coverups, etc. are de-facto pro-social - they are trying to sway public opinion on political issues, ostensibly to make their society better. It becomes anti-social when those arguments are being pushed by outside parties (Russian/Chinese/corpo propaganda) or grifters.
Pro-social speech can also become anti-social when people get too heated and start attacking others based on their beliefs. It's difficult to deal with speech that is both pro- and anti-social, in terms of trying to convey a sincere argument while also being toxic to those who don't agree. In cases like that, "rules of engagement" or a code of conduct should be implemented.
Anti-social speech is saying things without the goal of construction or progress, but instead with the goal of destruction or abuse. Spam and porn are anti-social in certain contexts and are treated as such, the same way drug use and swearing are (or maybe used to be).
Assuming you believe in the thesis that free speech prevents social collapse and totalitarianism, then it doesn't matter whether you disagree with or even hate certain arguments/views, if they are sincere then they are pro-social and should be given some platform to be heard and interacted with in the mainstream.
To a religious conservative, it would be pro-social to say “marriage should only be between one man and one woman”. To many progressives, this would be “anti-social”.
Here’s a conundrum. Is “we should kill all nazis” pro-social or anti-social? On its face it seems to promote violence, so maybe anti-social, but the people it’s targeted at themselves support horrific views.
As a Gen X libertarian, I am pretty much a free speech absolutist (I support minimal legal limits such as CSAM because there must have been victimization involved). It boggles my mind that so many young people today don’t support free speech; tides of culture and government change and if you allow free speech to slip away when trends support suppressing ideas you oppose, it won’t be there for you when ideas that you support are in the crosshairs.
I’d be curious to hear whether you think that this suspension is compatible with free speech absolutism? Put differently, was Musk simply marketing himself as an absolutist (to win over following) but is in reality a pragmatist, willing to bend his stance when he views it in his own or society’s best interest?
If liberals already hate Elon, then the best course of action is for him to pander predominantly to people on the right. Right now, the best way to do that is by advertising free speech.
Pro-social doesn't mean "positive" or "good" or "correct", it means that it's an attempt at progress or improvement for the community, from some sincere point of view.
"We should kill all nazis" is pro-social in terms of sincere critique, and anti-social in the threat of wanting to kill them. If we're at war with nazis then calling to kill them is likely mostly pro-social. If they're part of our tribe, whereby you are threatening your own people, then it's definitively anti-social. "We shouldn't accept nazis" is fully pro-social, and so is "we should all be nazis".
We can instinctively tell when someone is trying to be cooperative or offensive (attacking, deliberately destructive) to us in some way. We have the capability of recognizing it even if there are cultural/moral/logical (epistemic) divides. We just have to take the time to understand the other party's perspective and situation, then we can categorize their behavior.
I'm assuming what they did was something like write "There are only two genders" in an exam paper as a form of protest. If that's the case then I'd say what they're expressing is pro-social, but they're doing it in an anti-social manner, and should have been reprimanded on that basis.
And, assuming again that that's what happened, then it's definitely not "hate speech" - which is exactly what I just tried to differentiate in my previous comment.
But reading about this case, it seems that FIRE was trying to get TPUSA to be allowed as a student group on campus: https://apnews.com/article/scranton-28d927628ee14bf5aefbaaa7...
There's nothing there about them trying to defend students joining courses and being toxic in them.
This isn't taking place in course discussion. The course has nothing to do with trans people.
They take a course in a topic they do not care about that is taught by a trans person (or a gay person, or a woman). They deliberately write statements specifically targeting the status of the professor in an extreme way, but in legally protected places. They do this in an attempt to get the professor to react in some manner so they can sue the school for discrimination against conservative students. They are clearly coached by a legal team, because 18-year-olds do not actually understand the precise legal boundaries they can walk up to.
Ok, maybe spam can be defined more or less formally (though you will struggle to define "advertisement", I'm afraid, so maybe spam has the same problem).
But "pornography" is big can of worms. Each culture and each person have its own definition, and global platforms are, errr, global.
As a result we cannot post classical art to Facebook because of those horrible women's nipples, you know. Pornography.
A social media site needs a sensible response to Michelangelo's David - hopefully not censoring it - but also needs a sensible response to Goatse getting upvoted to the front page by trolls.
(1) Where will you draw the line? Who will you ask? Will you ask Iranian or UAE users? Will you ask US users (and the USA is a very puritan country, even in 2022, IMHO)? Will you ask users from regions of Africa where a bare female chest is the norm (bonus question: will you allow selfies from these users)? Why? Why not?
(2) Why can trolls upvote something onto a user's front page? Why did something not posted or liked by the user's "friends" ("subscriptions", "connections", you name it) appear on the front page of a logged-in user? It is a social network, not a news portal ;-) And if over-the-line content is liked by a user's "friends", the user can unsubscribe at any moment; it was their decision to add content from the offender to their feed in the first place.
> Why does this matter? Because free speech, free expression in general, is a means to express values and beliefs.
Saying this does not convince people who are concerned about hate speech. Such individuals imagine that the government or the platform will, or can be coerced, to simply remove whatever they do not like. These folks, who consider themselves among the majority, never imagine that they will one day be a minority, or that such power will then be used against them. The truth is that all of us are being moderated all the time on social media, but that's hard to demonstrate in all cases at all times, which is why the secrecy of such moderation tends to be effective for a period.
Drawing the line at legal speech is important because words are not violence, and moving it anywhere else leads to more disagreement. Other people's words, which you may find offensive and which may cause real psychological harm, are still discretionary. Some may find them harmful, others will not. Laws aren't supposed to be subjectively interpreted. The exception for when speech can be punished by law is defined by Nadine Strossen as words that "in context, directly causes specific imminent serious harm".
> No one values spam, so no one writes hand-wringing essays about whether it is ok to kick spam and spammers off social media.
That's not really what happens, though. Spam is a useful mechanism for getting platforms to build more tools that secretly remove content. I mention it in my talk at 28:00. These new censorship tools are mostly used to suppress speech from individuals, not spammers.
> Most people recognize that porn, even if they like it, is not something that should be in everyone’s faces all the time, so there is little existential concern for free speech as a whole when porn is excluded or hidden from some contexts.
That's not true, "obscenity" has always been under attack. Look up Anthony Comstock  , a largely successful crusader against all things he found obscene. He didn't stop at pornography, he went after literature describing contraceptives, abortion, and even people who just criticized him. It was just like how today's radical trans movement seeks to remove voices of detransitioners from social media . It doesn't fit their world view, they find it offensive and they don't want you to see it. That said, there is broader agreement about keeping pornography away from children.
The vast amount of censorship today, most often done secretly, cuts out the middle and enables both extremes to isolate themselves in their own bubbles. Instead of trying to baby-proof our world, we should be world-proofing ourselves.
> You cannot expect to make sense of free speech if you try to ignore other values.
Nobody's saying we should ignore other values. If you hold this view then you've misinterpreted the constitution, whose 9th amendment states that rights are not to be held in conflict with each other.
What should be said here is, you cannot make sense of free speech if you do not examine its history and how it's been relentlessly defended, not just in the US, but everywhere possible for arguably the whole of human history, with varying degrees of success.
One opinion I've heard in Germany is that the free speech absolutism and value neutrality of the Weimar Republic's constitution allowed the NSDAP to take power in the first place.
It's the constitution, innit? It's a confusing document, with many compromises. But many US citizens treat it like the Shroud of Turin. The German laws are tied up with Germany's history in the 20th C. In other parts of Western Europe, restrictions are less stringent and less specific.
Obviously there are types of "harmless" speech that are likely to provoke a violent response. Personal abuse is an example, but so is the language of the likes of Enoch Powell's "rivers of blood" speech. Do people really think that speech that's likely to cause public disorder is OK?
I like the author's distinction between "cultural" and "legal" freedom of speech. I instinctively dislike restrictions on speech; but provided they are used mainly to nudge people into treating their fellow citizens with respect (and with an explicit exception for satire), I see them as an overall good. That is, you can use legal restrictions, with a light touch, to encourage an atmosphere of cultural restraint.
Arresting that guy who protested Charles III's accession by holding up a blank sheet of paper: that's not a good thing.
> It's the constitution, innit?
Is it? I wasn't aware that hate speech was explicitly singled out as protected in the Constitution.
> Do people really think that speech that's likely to cause public disorder is OK?
Public disorder, no. But innocent people getting heard? Well, maybe. Lots of speech that hurts innocent people is already banned. Things like libel/slander. Threats. Blackmail. All of these can hurt someone, silence someone, inspire someone else to hurt someone. I don't think anyone ever objected to these being banned, and even punishable with real prison sentences.
> Arresting that guy who protested Charles III's accession by holding up a blank sheet of paper: that's not a good thing.
Did that happen? That's a silly thing to arrest someone for. It calls back to Putin arresting people for holding up blank sheets of paper.
Those are civil offences; they're not "banned", the victim has to sue for damages.
> Threats. Blackmail.
Those are indeed banned, i.e. criminal. They are both effectively threats. It's a criminal act to threaten a criminal act.
> Did that happen? That's a silly thing to arrest someone for.
Oh yes, it did:
"Silly" is rather a gentle way to characterize it; it was oppressive. But this country is very silly about it's monarchy - as the USA is about it's constitution.
I am a republican (small 'r'), and using the police to suppress anti-monarchist sentiment is offensive, and to me scary. Anti-monarchist opinions have a long tradition here; there was another king called Charles, whom we decapitated nearly 400 years ago.
[Edit] No threat to Charles III intended; got to be careful what you say, if you can be arrested for holding a blank piece of paper, maybe you can be arrested for having a blank mind.
Is that not still a form of punishment? And a reason why most people would refrain from saying these things?
> > Threats. Blackmail.
> Those are indeed banned, i.e. criminal. They are both effectively threats. It's a criminal act to threaten a criminal act.
A threat is a threat to do something illegal. Blackmail is a threat to do something that's often completely legal, but sufficiently embarrassing that it would still make the target change their behaviour.
But as with libel and slander, the fact that they're illegal doesn't mean they don't happen. In fact, there are plenty of people threatening others online. And often it takes a judge to decide whether or not a threat counts as a credible threat. And then there are indirect threats, where the threatener doesn't threaten directly but directs others to do it, possibly in an unspoken way. And sometimes the vague things someone says can still inspire a mob to storm the Capitol with a gallows from which to hang the Vice President.
Where do you draw that line? And can you only draw it after people act on it? These issues are incredibly complex, and very context dependent. And pretending they're simple is going to make us ignore all the edge cases. And there's a lot of those.
No, it's not. It's compensation for damage done. If a bad driver bends your car in an accident, they don't get punished, they just have to pay for your repairs.
> A threat is a threat to do something illegal. Blackmail is a threat to do something that's often completely legal
You seem to have contradicted yourself; if a threat is a threat to do something illegal, how can a threat to do something completely legal be a threat?
> But as with libel and slander, the fact that they're illegal doesn't mean they don't happen.
There is such a thing as criminal libel, for which you can be punished; but in the general case, neither libel nor slander is "illegal". They are torts, or "wrongs", for which a victim can demand compensation. Criminal libel has been abolished in my jurisdiction. I don't think it ever existed in the USA. [Edit] I'm wrong; some states apparently have criminal libel statutes.
> And can you only draw it after people act on it?
"Threatening behaviour" is a criminal offence here. It generally means placing someone in fear of physical or mental injury. You don't have to actually injure them to commit the offence.
> make us ignore all the edge cases
I guess that's why libel cases often cost lots of money in lawyers' fees. Equity and tort are two fields of civil law with lots of edge cases. People litigate these cases precisely because the lines aren't drawn clearly. They tend to turn on issues like fairness, issues that are ultimately a matter of judgement.
But you're still being held liable for that damage, despite the fact that you did not cause any physical damage. It was just words, and yet their effect is considered damaging.
That's aside from the fact that there's also a thing called "punitive damages", so that still is quite explicitly punishment. So these are words that do damage, and that people are punished for.
> if a threat is a threat to do something illegal, how can a threat to do something completely legal be a threat?
I guess I should explain the terms I use a bit better. You're right that words can mean different things, and I'm talking about the legal concepts of threat and blackmail, but in my too-brief explanation of blackmail, I use the word "threat" in one of its other meanings.
So here's what I mean by threat and blackmail:
Threat: expressing an intent to hurt someone. Here, hurting someone is illegal, but expressing the intention to do so is illegal as well.
Blackmail: expressing an intent to reveal embarrassing information about someone unless they do something, in order to coerce them to do that thing against their will. Here, revealing the embarrassing information might be totally legal. In fact, if the information is about a crime the target committed, revealing it might even be the right thing to do. And yet saying you're going to reveal it unless they do a particular thing is illegal.
In any case, I think we have established that there are forms of speech that are illegal, are considered damaging, and/or are sufficiently harmful or damaging to warrant compensation or punishment.
So to get back to the original topic: is hate speech damaging? And is it sufficiently damaging to warrant compensation or punishment? And if we were to conclude that they are, legislating that is not necessarily a more significant infringement on the freedom of speech than the existing laws against libel, blackmail and threats are.
It is a complete defence to an accusation of libel that what you wrote is true.
That is, libel means publishing damaging lies about someone. In most jurisdictions, it also means that the lies were malicious: the writer or publisher intended to cause damage. Damages are assessed as lost money; if the lies hurt your feelings, you won't get damages. But if your hurt feelings required therapy, you can sue for the therapist's fees.
It's annoying (understatement) that damage to reputation is assessed in this way; it means that rich, famous people get much larger awards than ordinary people, because they lose more money from damage to reputation (they have more money to lose, for one thing). You can't sue someone for torpedoing a $1M deal if you aren't the kind of person that makes $1M deals.
Obviously. It would be a bit too dystopian if the truth was illegal (though in oppressive regimes it often is). I agree that it's a problem that harmful lies about rich people are punished harder than lies about poor people, because they can attach a larger monetary value to the damage. That's of course a product of the fact that it's a civil issue and revolves around damages. Getting rid of that artefact would probably involve making all lies illegal, and I think everybody here agrees that that would be a couple of steps too far.
And then there's the fact that the entire justice system is simply far more accessible to rich people. And they do sometimes use it to try to suppress inconvenient truths.
Libel and slander are civil matters in the United States.
You're allowed to think whatever you want, but you're not allowed to publicly say it, unless it's within the bounds of legal opinion (which are pretty wide, it's not like you're only allowed to state one opinion, but they are also clearly limited). Insults, while clearly speech, are forbidden and it's part of the penal code (up to one year in prison, but it's usually settled by fines), so not a civil matter.
What Germany certainly doesn't have is a culture of free speech. It's important to remember that the last totalitarian dictatorship in (East) Germany only ended 30 years ago, and there was very little freedom of speech there, and the limits were violently enforced.
German culture highly values conformity, and it's no accident that the spiral of silence was proposed by a German researcher (Elisabeth Noelle-Neumann), it essentially argues that people check their views for majority-compatibility and will stay silent if they find that they're not accepted. Because they stay silent, others who think like them will find their ideas not accepted in the majority and will also stay silent etc etc, as an act of social survival (which it certainly is in Germany, but isn't as much in countries with a culture of free speech).
> speech absolutism and value neutrality of the constitution of the Weimar Republic
The Weimar Republic had largely the same fundamental laws regarding freedom of speech as the Federal Republic of Germany today: you're allowed to say what isn't outlawed by a law, and you must not be punished for saying it.
So in legal practice the right to human dignity overrides the right to free-speech.
For example, if you look at countries like Russia, China, Iran, Myanmar, etc... what's illegal or repressed is any promotion of opposition to the government. This is plainly anti-democratic & authoritarian.
Whereas in a lot of Europe it is illegal to promote only extreme ideologies, but apart from that the democratic discourse is as messy and varied as you'd expect it to be. (And due to the variety of electoral systems I'd argue that political discourse in much of Europe is much more varied than in places like the US.) We have countries swinging from left to right wing governments and even powerful political parties that started as far right wing or terrorist groups.
What we all see is how the balance can be tipped from fragile democracy to repressive authoritarianism e.g. Hungary.
Democracy is a constant effort, requiring constant vigilance. These are all discussions worth having, at least in good faith.
Here's former ACLU executive director Ira Glasser in a podcast at 1:49:10 (transcript):
> Times writes back and says, we can't publish this ad because we – because you criticize Nixon and it's 1972, and it's in the middle of an election campaign, and if you criticize Nixon, unless you have his permission, it counts against the ad – the expenditure limitation of his opponent. I said, I have to get his permission in order to criticize him, what are you, crazy. It's the same issue that Cambodian bombing ad...
> So, we file a lawsuit, we win that lawsuit, and we strike the law down for the second time. This goes on over and over again, and I'm saying to myself, this is a law that is supposed to get in the way of nasty rich folks, and the only two times it's been used so far is against these three aging radicals and the ACLU on totally legitimate, core First Amendment speech, how can that be, and that's how we get into this issue, and why we see it so clearly as a free speech issue. It goes on and on and on and on, and they kept coming back, doing it again, and again and again and again in 16 different versions.
Earlier in that podcast episode he makes a good case for campaign finance being like the right to travel, around 1:36:10. Have a listen, you might learn something. The embedded player on this page lets you skip around the episode without subscribing to Soundcloud.
This is an argument against laws, corruption, and government. It is not, however, an argument against any particular position on speech. A capricious supreme court could revert to the speech precedent the US had for the first 150 years of its existence, which we both presumably agree is flawed. Laws can be written and misapplied.
Neither of those things being possible suggests that, as a value, free speech maximalism is superior to free speech...almost maximalism. It's clear that you can have stability and discussion and support for fairly extreme ideologies under the more restrictive speech standards of Europe.
No, it's an argument for studying the history of law and understanding that some are good and some are bad. They don't come with a cover sheet saying so. You need to figure it out for yourself.
> Laws can be written and misapplied.
All the more reason to study them and give careful consideration to how they have been misapplied in the past. We're in agreement there.
You're a different commenter than above, so I'm not sure if your main point is to agree with them, disagree with me, or both, and on what exactly.
> It's clear that you can have stability and discussion and support for fairly extreme ideologies under the more restrictive speech standards of Europe.
I think any anti hate speech laws are more experimental than the American experiment. The point of free speech is to protect minority voices. It was championed in the '60s when the left needed it, and it rose from the ashes in the early 1900s after Comstock tried to surreptitiously-but-not-so-surreptitiously squash it. The censor never calls himself a censor.
The modern American jurisprudence on speech is fairly recent, as I alluded to above. There are tons of cases through the 18th and 19th centuries where the Supreme Court upheld blasphemy and obscenity convictions, allowed the federal government to censor speech, favored Christian religion and churches over others, and even considered libel a crime.
As late as 1951 (Dennis v. US) the Supreme Court ruled that members of the US Communist Party could be imprisoned because socialist organizing was a threat. And it took 20 more years for Brandenburg to cement the modern jurisprudence, 200 years after the American experiment began, and only 50 years ago.
> No, it's an argument for studying the history of law and understanding that some are good and some are bad.
No, my point is what is socially acceptable goes beyond simply what is legal. And this is always and has always been true. Many laws can be used for tyranny. If you want to avoid the potential for tyrannical laws, you can't have any laws, and that doesn't work either.
Like again, this whole thing comes down to the fact that Western European countries (and the US!) have had more restrictive definitions of free speech for far longer than Brandenburg has been the law of the land, and they aren't tyrannical.
Do you agree with how US jurisprudence has drawn the lines on free speech? If not, what would you do differently?
>...what's illegal or repressed is any promotion of opposition to the government. This is plainly anti-democratic & authoritarian.
What happens in democracies (or at least in the US) is that this just gets branded as "mis" or "disinformation", or worse a "conspiracy theory". A formulation I'm fond of is "this isn't happening, and it's good that it is", because it's a pattern repeated throughout political discourse. Depending on who is talking, something is either a crazy extremist conspiracy theory, or something we should all be cheering on.
It's a feature of humanity from where I stand. People crave both stability and change. Beating them over the head to force stability doesn't work so well, nor does changing too quickly. The idea of a system that encourages peaceful transitions is to find some balance. Of course, that balance can also be disrupted.
Societal totalitarianism is a very real concern. In its benign form it leads to quirky cultures and in its extreme leads to total suppression of any opposition to the zeitgeist, even without a fully authoritarian government. Unfortunately there's a massive fuzzy grey boundary between the two and it's up to the people of a country to course correct when they can, as getting back from a restrictive society is difficult.
This is a good phrase; I'm stealing this. I agree with your diagnosis of it, as it can indeed be benign and result merely in "people being weird".
Sure, and America hasn't become a country of hate speech or devolved into tyranny yet either, despite what some claim, so do those laws actually add any value? Seems speculative at best.
> One opinion I've heard in Germany is that the free speech absolutism and value neutrality of the constitution of the Weimar Republic allowed NSDAP to take power in the first place.
Some of their first moves after taking power was crack down on speech and imprison people that spoke out against them. What would have happened if free speech was a strongly held principle among the people such that they wouldn't have tolerated that? What if dissenting voices had been permitted to speak about and expose the secret death camps?
Hate speech spewed by the likes of Tucker Carlson caused a mass shooting like a week ago...
Furthermore, drawing a causal link between Tucker and the shooter is speculative at best.
Also, the people who watch his show agree with his speech and probably don't consider it stochastic terrorism because they also want to genocide trans people. It's not a hard concept to grasp. It's a hate-hour for a very specific slice of the population. And that slice is committing more and more acts of violence.
Do I think Tucker is DIRECTLY responsible for the shooting? Maybe not, like you said it's speculative. But do I think there is a causal link between the entire "Trans people are evil" rhetoric crowd and the shooting? Absolutely, I think it'd be difficult to argue the other direction, we have studies and statistics on trans hate crimes already.
And then do I think Tucker Carlson is part of the "Trans people are evil" rhetoric crowd? yes, hard to deny that imo. And you said it yourself, he's the largest voice out there in that crowd. So I think he definitely shoulders some blame.
Also, speculation turns out to be more and more true each time. Conservative talk show hosts are cited in conservative domestic terrorists' manifestos all the time. This "hour of hate" is absolutely directly contributing to violence.
If bluntly disagreeing with progressivism makes you blame-worthy for mass-murderers, well then we should throw all conservative media hosts in indefinite prison. Or execute them.
"The extent of some liberal ideas" is a really soft way of putting "open racist and transphobe". He literally did a segment on the "Great Replacement Theory".
He's a white supremacist. He can get absolutely, thoroughly, and without lube, fucked.
Tucker Carlson thinks that immigration is ending the white race. That's a direct quote, go watch the segment.
I didn't hear this "direct quote" of yours anywhere. Tucker doesn't mention the "white race" at all, he speaks about government policies and how they influence native population growth (Americans born to Americans), and about the rate of immigration. If I missed your direct quote, then please provide a timestamp. While I find many of the supposed factual claims there debatable, this argument that too much immigration can be destabilizing is not obviously false.
What I've seen time and time again in situation like this, from both sides, are claims of "dog-whistling", wink-wink-nudge-nudge "we know what you're really talking about [insert-person-I-hate]".
I'll also note that "demography is destiny" has been an unofficial Democratic party tag line for decades now. The notion that Democrats have more vested interest in increasing immigration is not exactly a conspiracy theory, although given how the Latin population is now leaning Republican I think they're realizing that that tagline is deeply flawed as an actual strategy.
Here's a short one minute blurb, should be digestible enough for you.
"The Great Replacement"
These are not dog whistles. They're not subtle. He's just being racist.
> You seem like a positively delightful person.
At least I'm not a racist xenophobe.
This is what happens when you form ideas divorced from any evidence. More Democrats watch Fox News and Tucker Carlson specifically than any of the more liberal-leaning networks. Do most Democrats want to genocide trans people too? Or are you claiming they're watching Tucker ironically?
> And then do I think Tucker Carlson is part of the "Trans people are evil" rhetoric crowd? yes, hard to deny that imo.
It's easy to deny it actually, he has never called for violence against trans people to my knowledge, and in fact has always condemned violence. People are not so circumspect if they think they're legitimately fighting evil.
But I am not a historian and cannot reconcile the articles content with what you’re stating here.
You're as free as any historian to form an opinion and share it. The rest of us can take the fact that you're not one into account while reading it.
Thank you for sharing a link to this series, I was not aware of it.
Imagine saying this about microbiology or something.
Dismissing people by saying things like "imagine joe six pack sharing his opinion on microbiology" is part of what made the anti-vax or vax-skeptical crowd so upset. In many circumstances, the rest of us did not meet them where they were. That requires listening, and yes, encouraging uninformed people to express their opinions.
It can be popular to behave dismissively online because it seems like you don't need to deal with the aftermath. When everyone does it, that adds up.
I'm not saying everyone needs to be out there correcting misinformation, just that we deal ourselves a better hand when we accept that other people get things wrong. It's okay for people to be wrong, even in the absence of being corrected. Telling people to shut up doesn't nip the virus in the bud, it is the virus.
I hear that a lot nowadays too. I wonder why that is?
There are plenty of critics living in those countries who think it's a bad idea to limit free speech beyond limiting imminent incitement of violence. Nadine Strossen discusses this in, I believe, her appearance on the Higher Ed Now podcast, including this excerpt,
14:20: "Countries like Germany which were in the vanguard of enacting and enforcing laws against hate speech were in the rear guard when it came to outlawing actual discrimination against people in the workplace, in places of public accommodation in housing. It wasn't until very shockingly recently that Germany was dragged kicking and screaming by the EU to adopt those laws… targeting expression is only a superficial manifestation of a deep-seated problem, and it also is divorced from the actual real world consequences of discriminatory and violent conduct."
The Weimar Republic had hate speech laws. Nadine makes the case that photos of Hitler imprisoned for running afoul of those laws helped him rise to power.
It's not so unlike Milo receiving attention for being prevented from speaking on campus. The protesters do not realize their dogmatism and censorious behavior bring Milo more followers. Protesters should consider how their behavior will be perceived by someone who knows nothing other than the fact that one person wants to speak, there is a willing audience, and that protesters either: stopped that conversation from happening by being noisy, or responded with counter speech and let the haters bury themselves. If you are shouting over someone, you become a hater. It doesn't need to be that way.
But in terms of equality, I'd say what we get in practice *today* in Germany and Switzerland (don't know about Austria) is a society that is arguably *more* equal than the vanguard.
The catch here is that people are not aware that their content is frequently shadow moderated. Even if you do try to engage with vitriolic commentary, your comment may be removed without your knowledge, and the forum will still show you your comment as if it's not removed. It works like this on Reddit for all comments, but again as I said, all forums now engage in shadow moderation. And at the end of the day it is also an us problem, not just a them problem, because we're often the ones reporting the content that gets taken down. When people are made aware that this is happening, they do call for change, for example,
> ...what is the supposed rationale for making you think a removed post is still live and visible?
> ...So the mods delete comments, but have them still visible to the writer. How sinister.
> ...what’s stunning is you get no notification and to you the comment still looks up. Which means mods can set whatever narrative they want without answering to anyone.
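The mechanics described in the quotes above can be sketched in a few lines. This is a hypothetical model for illustration, not any platform's actual code: the key property is that a removed comment stays visible to its own author, so the author never receives a signal that moderation occurred.

```python
from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    text: str
    removed: bool = False  # set by a moderator; the author is never notified

def is_visible(comment: Comment, viewer: str) -> bool:
    """Shadow moderation: a removed comment disappears for everyone
    except its author, who still sees it rendered as if it were live."""
    return not comment.removed or viewer == comment.author

c = Comment(author="alice", text="an inconvenient opinion", removed=True)
print(is_visible(c, "alice"))  # True: alice has no idea it was removed
print(is_visible(c, "bob"))    # False: everyone else never sees it
```

Because the visibility check branches on the viewer's identity rather than on the comment's state alone, there is no user-facing difference between "live" and "removed" from the author's perspective, which is exactly why the commenters quoted above were surprised.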
Here's another way to think about whether or not to use shadow moderation. If you think it's a good idea to use it, then you think it's an effective tool for changing minds. Your ideological opponent, whose comments you want to secretly hide (lest they give you more trouble), is, by your definition, immoral. The extreme left and right both think this of each other. And since they are immoral, they will make more immoral use of this tool than you do, thereby exerting more influence than you, which is precisely the opposite of what you set out to achieve. Therefore the only winning move is not to play. You cannot support the use of a tool in the world without your opponent eventually getting their hands on it. So it's really a game of: who can make the first truly free speech forum that encourages rather than secretly shuts down debate?
Is that really the catch? Reddit didn't get to where they are today by saying "we totally allow free speech". Since its inception, it's been a trope that Reddit actively moderates away actively harmful content, and even constructive-but-contrarian content to the point where every small subreddit community is a circlejerk/echo chamber for its own ideals. Anyone that just downloads the app might not know it at first, but if they actually dive into creating an account and joining subreddit communities, they'll learn soon enough since moderators try not to make their own community mad by silencing all criticism of themselves (when they do, it tends to be exposed pretty quickly), and any huge shifts in policy tend to sprout divergent communities, like how r/superstonk was created out of a wallstreetbets moderation issue.
I think people understand that, especially when they're using an iPhone[0,1], everything they see is going to be filtered, with the exception of legal porn on social media apps. It's just the way that anything and everything gains enough traction to become relevant among the masses, and they'll know if they're on a "true free speech" platform as soon as the platform shows them pro-nazi images right next to pictures of puppies with 0 algorithmic filtering or sorting.
Yes, users don't know about this. That is clear from the above quotes.
> Reddit didn't get to where they are today by saying "we totally allow free speech".
I don't know whether shadow moderation was necessary for Reddit to grow. They certainly don't inform users about it. That's manifested by the secrecy inherent in the feature itself.
> I think people understand that, especially when they're using an iPhone[0,1], everything they see is going to be filtered
People expect that authors are informed when their content is removed. That isn't happening with any consistency on any of the platforms, and it's built into the system. It is not a choice made by mods.
> they'll know if they're on a "true free speech" platform as soon as the platform shows then pro-nazi images
That's just the result of sidelining today's unpopular extreme. Both 60 and 120 years ago, information about gay marriage or contraception was considered immoral by those in power. It depends who's in charge. The environment will flip at some point, so don't give up your free speech principles. The shoe will eventually be on the other foot.
The refrain "I believe in free speech, BUT" is so common that first amendment lawyers regularly joke about it.
"Free speech for me— but not for thee" is also the title of an excellent book by Nat Hentoff 
Personally I like how ACLU founder Roger Baldwin put it to historian Arthur Schlesinger, Jr.:
> Arthur: "What possible reason is there for giving civil liberties to people who will use those civil liberties in order to destroy the civil liberties of all the rest?"
> Roger: "That's a classic argument, you know. That's what they said about the Nazis and the Communists, that if they got into power they'd suppress all the rest of us. Therefore, we'd suppress ‘em first. We're going to use their method before they can use it.
> Well that is contrary to our experience. In a democratic society, if you let them all talk, even those who would deny civil liberties and would overthrow the government, that's the best way to prevent them from doing it." 
This isn't empirically true though, as someone else mentions in this thread there's a fair amount of evidence that past a certain point, more free speech makes fomenting a populist revolt to end democracy easier, not harder, and that if democratic stability is what you're opting for, something more akin to western Europe's laws would be better than the US's.
Like, as much as I like the ACLU, the argument you're presenting is functionally "trust us", which, no, I don't trust the free speech advocacy org to have an objective view on the dangers of unrestricted speech. It is contrary to your nature.
Since you must have already read my above comment about punishable speech that "in context, directly causes specific imminent serious harm", you must now be arguing that there are words beyond that which cause violence. That's wrong. Words are not violence. When you punish speech that does not cause violence, you turn into the censor. Consider how you sound when you argue this point. It draws people towards the cause you seek to censor, not away from it. You enable your opponent, just as Anthony Comstock did over a hundred years ago. People root for the underdog, they want to hear both sides, and to be trusted to make up their own minds. When you take away their ability to choose, you become their enemy, not their ally.
> the argument you're presenting is functionally "trust us",
Fundamentally, this is the exact opposite of the point I'm making. The censor is saying, "trust me, this was bad, seeing it will hurt you". Free speech advocates are saying, "make up your own mind" and that you have the right to associate with whomever you please.
> which, no, I don't trust the free speech advocacy org to have an objective view on the dangers of unrestricted speech.
Free speech does not mean unrestricted speech, as mentioned above.
> It is contrary to your nature.
It sounds like you feel there is no common ground to be found. Yet we are all human. We have both similarities and differences that can be discovered through conversation, some of which may be easy like this one, some of which may be hard. Giving up on that is equivalent to saying we should either all duke it out or live separately. I'm probably not the most difficult person debate-wise you'll ever meet. You might as well start somewhere.
> Words are not violence.
Sure. But I never claimed as such. I said words could cause violence. Combustion is not flight, and yet combustion, in the right circumstances, causes flight. Two things need not be equivalent for there to be a causal relationship between them.
> you turn into the censor.
Assuredly, but here you're begging the question. Why is becoming the censor an ill that must, at all costs, be avoided?
> It draws people towards the cause you seek to censor, not away from it.
This is a common claim, but we know it to be untrue. While certainly, some people may be drawn to the restricted section of the library, fewer people will ultimately hear the material than if it's being preached about on the street. We see this with censorship all the time, both empirical examples (censorship of sex-related topics in religious communities) and quantifiable ones.
> Fundamentally, this is the exact opposite of the point I'm making
No, I mean that in the quote you provided, his claim that letting everyone talk is, in their experience, the best way to prevent suppression is totally unsupported. You have to trust him that that's true, and of course the guy who runs the free speech organization is going to say that censorship is bad. If he didn't believe that, he wouldn't run the free speech organization. My simple question is "what if there's a point after which free speech actually makes things worse"?
I mean you already agree that such a point exists: imminent violent action. But presumably I could come up with other examples of things you'd be okay with censoring (CSAM is a common example). If you're okay banning that speech, what's the harm in moving the needle a bit in one direction or the other, especially if moving it results in a society that is more stable?
I think there are good and nuanced answers to all of those questions, but you aren't even engaging with them because you seem to be claiming that, without exception, it is obvious that the optimal and least harmful choice when picking what speech we should ban is "imminent violence incitement and nothing else", and when I ask "why set the bar there", you quote someone who says "trust me". That's not convincing.
Your line of argument ends in tyranny. "We must protect you for your own good, or better, the greater good."
Specifically with the irony that while you decry "trust us," it is precisely what is required to accept your "moving of the needle a little bit."
> Specifically with the irony that while you decry "trust us," it is precisely what is required to accept your "moving of the needle a little bit."
Not at all, because I have not argued for any particular movement. What I'm asking for is an argument that the status quo is optimal that is more robust than "trust us, the alternatives would be worse", especially when there are fairly good examples of the needle being in other places in other nations and it being, generally speaking, fine.
Why? Nobody said it's written in stone. Laws remain open to interpretation for as long as a judiciary exists.
> there are fairly good examples of the needle being in other places in other nations and it being, generally speaking, fine.
It sounds like you have a particular idea about what should be changed, yet you are reluctant to say which one, and you simultaneously want everyone replying to you to address all of those possibilities ("you aren't even engaging with them").
In other words, you are placing all responsibility upon your interlocutors to anticipate your thinking and none upon yourself.
Not at all.
I mean yes I have opinions, but I'm not advocating for any particular opinion here. I'm asking for you to justify yours with something better than the ACLU president having said "trust me". Why is the line where it is in the US better than the line a little to the left or the right of that? What makes the choice to ban the speech we do and allow the speech we do, as opposed to more or less socially optimal?
Fundamentally, I'm not asking you to anticipate my thinking, I'm asking you to catch up and engage with the questions I've already engaged with (some of which I asked upthread!). Because it is unsatisfying that the only answer you can provide to "why should we do things this way" is "trust us, the alternatives would be worse".
This article is from FIRE, not the ACLU. The ACLU now interprets civil liberties and rights to be potentially in conflict with each other, which is part of why FIRE got started and grew so much.
> I'm not asking you to anticipate my thinking, I'm asking you to catch up and engage with the questions I've already engaged with (some of which I asked upthread!)
The questions you asked were already answered before you asked them. You did not absorb the discussion above before you started asking questions, and are now asking to be spoon fed answers. Take some time to read the other comments above in this chain, check out the sources, listen to FIRE speakers on why current jurisprudence is where it is. Nobody is saying "trust us", they're saying, "this is what I think, and here are the sources that inform my thinking. You can check them out and decide for yourself whether you agree or disagree."
You've ignored all of this and insist that someone must tell you why they are right. Nobody can decide for you what's right. Opinions are subjective. Make up your own mind, nobody can do it for you.
Your second point is impossible to argue, because you request that someone argue against a subjective and infinitely definable 'optimal' that you projected.
Where is the evidence of this?
Exactly. And it isn't even that difficult to imagine how it could happen: TERFs, or trans-exclusionary radical feminists, want to shut out trans voices, and they do this by accusing trans people of engaging in misogynistic hate speech whenever they speak out about trans issues. How sincere the TERFs are doesn't matter, ultimately, as the result is the same: Trans people being shut up and shut out in the name of keeping misogyny off the platform, or even out of the country. In the words of Vox:
> TERF ideology has become the de facto face of feminism in the UK, helped along by media leadership from Rupert Murdoch and the Times of London. Any vague opposition to gender-critical thought in the UK brings along accusations of “silencing women” and a splashy feature or op-ed in a British national newspaper. Australian radical feminist Sheila Jeffreys went before the UK Parliament in March 2018 and declared that trans women are “parasites,” language that sounds an awful lot like Trump speaking about immigrants.
The "Defend Women" line is a perfect tactic for TERFs to use, and they use it early and often.
Again, claiming those people are cynically using feminism to fight a different battle is pointless: The fact they could use hate speech laws to silence a minority is the troubling thing.
Yeah, it's wild. Meghan Murphy has a lot of guests on who discuss this topic. Some of it is pretty eye-opening because the issue is only briefly touched upon via established media, aside from right-wing sources. I guess that just provides an opportunity for independent journalists to grow by telling such stories. I never believed that other media would turn away stories based on ideology until I heard stories from child transitioners who later detransitioned. And I only got plugged into that by randomly reading commentary about Cloudflare blocking Kiwi Farms. Anyway, it turns out that institutions, made up of humans, display human-like flaws. Who knew?
> The fact they could use hate speech laws to silence a minority is the troubling thing.
It is and it isn't. I think society largely agrees that, at least, most words are not violence. And while we do dip into pretty heavy censorship territory once in a while, and it does breed a lot of distrust and hate, it also garners more support for free speech. It's clear to more and more people every day that censorship, particularly the secretive censorship that pervades social media (which is, IMO, the biggest issue and the one we should tackle), does not work. Disinfo experts have been promising for years that just one more round of censorship and elections will finally, once and for all, kill off all hatred and discontent, just like people who claim AGI is just around the corner. It hasn't happened, it's not happening, and it's time to go back to our roots: free speech and counter-speech.
What she actually said was this:
"When men claim to be women [...] and parasitically occupy the bodies of the oppressed, they speak for the oppressed. They come to be recognised as the oppressed. There's no space for women's liberation."
She is using this metaphor to point out that once men declare themselves to be and are accepted as women, they benefit from all the rights that generations of women have fought for - all while curtailing the means for women to speak about themselves, their lives, and their concerns as women.
Which, as I'm sure you'll agree, is a reasonable feminist viewpoint.
I wish del.icio.us still existed. I just want to put links in folders and share them.
Might have a better argument with gore instead of porn.
“What is your name?” can be illegal speech when directed to a 12 year old on the internet in the US.
This is itself a strawman.
No one thinks Musk will permit it on Twitter. The gutting of the moderation teams who tackle it is the concern. An underenforced rule is often not a very effective one.
Musk really went crazy with cutting staff, but I'm not sure if it was because he wrongly thought he needed to cut the moderation teams in order to support free speech or if he just did it because moderation is expensive.
Most of us are just fine with limiting speech when it's for the public good. We don't want companies to be able to lie about their products, so we support laws against false advertising. We're okay with people getting arrested for making bomb threats. We even approve of compelling speech when it means forcing companies by law to label the ingredients of their products!
Left unchecked, spam makes the services we love unusable. It can prevent us from having the very discussions we flock to social media platforms to engage in.
Social media platforms should provide a constructive environment where ideas can be freely discussed and that means moderation is necessary for things like keeping out spam, keeping discussions on topic, and even filtering out trolls.
Content moderation is essential to a healthy forum, but obviously care has to be taken to maintain a balance between restricting and allowing content, guided by what would best facilitate the constructive discussion of ideas and not by personal feelings about the ideas themselves. It's not an easy task, and it's thankless work, but if it weren't for moderation none of us would be here talking about how spam should or shouldn't be allowed to ruin everything.
I'd edit this. "Most are just fine with limiting speech when it isn't them." The "public good" to, say, Andy Ngo is going to involve throwing a whole lot of leftists in prison.
This whole thing is just about having the guy you like be the one making the decisions.
At what point does it become spam and we restrict what these people view as free speech?
I have a stunning revelation to make:
I am not in favor of allowing all legal things on the principle that because they're legal, they should be allowed. In fact, given that the thing you're replying to is arguing in favor of moderating SPAM, I figured it would be apparent. Apparently, it is not.
Furthermore... Like SPAM, harassing someone in the DMs by simply calling them homophobic or racial slurs is not "free expression." It's harassment. Harassment, libel, etc. are not things that everyone (hopefully not most) who believe in free expression as a principle are trying to defend.
I'm mostly not going to address the "lots of people have told me" and disregard it, since it's probably made up for the bait. But if not, I mean... Good for all of those lunatics, I guess. I'm not associated with them, and I don't like Twitter or any social media platform to begin with.
I think you can meaningfully distinguish individuals expressing distasteful speech from coordinated campaigns and harassment, and spam falls under the latter.
The "oh but lefties are mean too" argument immediately retreats from the idea of free speech absolutism. I'm down for that.
That's not where I would draw the line against free speech absolutism. Insults or rudeness from individuals should be permitted, insults from groups/coordinated campaigns is where I would draw that line because that starts crossing into inciting a mob. Mobs of any political persuasion are undesirable.
Uncoordinated-coordination seems like an emergent phenomenon of social media though, which is why this is a tough issue. It's almost like we need some kind of back pressure against virality to keep mobs in check.
That all having been said... SPAM and harassment are not problems because of the expression itself, they're problems because of the disruptive patterns of behavior. The point is not that you can't say something, or have a given opinion, or etc.
I'm not really sure how this came to be everyone's ultimate catch-22 on free expression when there are more obvious caveats, such as how arbitrary the law is. But as arbitrary as the law is, it's like gofmt: nobody's favorite, but everybody's favorite. (This is possibly one of the worst HN analogies this month, which, now that I think about it, should probably be a thing someone tracks.)
Okay, but to ban speech on this is once again to pass judgment on its value to the contribution of the discourse or whatever avenue for communication is at issue. That's why the absolutist position is ridiculous. It isn't navigable from any perspective save for the very perspective they are already criticizing.
Also, I'm not sure why you chose the word arbitrary. That's not what arbitrary means. Obscenity laws aren't arbitrary at all, they are based in specific judgments related to a community's perception of what is and isn't acceptable. I'm not saying obscenity laws are good or especially well-reasoned, but they are clearly not arbitrary. Perhaps you meant subjective/un-objective?
> Okay, but to ban speech on this is once again to pass judgment on its value to the contribution of the discourse or whatever avenue for communication is at issue. That's why the absolutist position is ridiculous. It isn't navigable from any perspective save for the very perspective they are already criticizing.
The key point that I've been failing to convey effectively is very simple: with SPAM, the expression itself is not the problem. If you post it 10,000 times responding to unrelated people, that is a problem.
(I realize that commercial SPAM is possibly what you are referring to here but... That sort of SPAM is more or less permitted on social media, so it's kind of neither here nor there.)
This generally follows: if you DM someone to yell racial slurs at them, you are harassing them. It's not about the platform banning naughty words, it's about banning disruptive conduct. The conduct is about the behavior, not the ideas or expressions expressed in them.
The "absolutist" position is basically never actually "absolutist". I initially thought people were interpreting it literally as a joke or something, but it seems like it has been taken pretty seriously. Yet, there are exceedingly few people who think that unprotected speech like CSAM should just be allowed. They DO exist, but I have a feeling the speech absolutists you are referring to do not. Doesn't that already make this discussion moot?
>The key point that I've been failing to convey effectively is very simple: with SPAM, the expression itself is not the problem. If you post it 10,000 times responding to unrelated people, that is a problem.
You have conveyed that but it is not a useful metric by which to filter things from an absolutist standpoint because you have to make a value-judgment on the worth of the speech in regards to the venue... exactly what I said before.
>This generally follows: if you DM someone to yell racial slurs at them, you are harassing them. It's not about the platform banning naughty words, it's about banning disruptive conduct. The conduct is about the behavior, not the ideas or expressions expressed in them.
It's not that it's disruptive... it's that it's harassment, which is already a civil action and likely criminal in your jurisdiction as well. If you are gonna talk about how arbitrary laws are... maybe know a law or two?
>The "absolutist" position is basically never actually "absolutist". I initially thought people were interpreting it literally as a joke or something, but it seems like it has been taken pretty seriously. Yet, there are exceedingly few people who think that unprotected speech like CSAM should just be allowed. They DO exist, but I have a feeling the speech absolutists you are referring to do not. Doesn't that already make this discussion moot?
I'm not Elon Musk saying I'm buying Twitter in order to support free speech... so don't look at me! I don't have problems with content moderation because I'm not naive.
You might have wanted to say that no one can agree on what speech is trolling and what speech isn't, but that's not because the word is vague. It's because people disagree on the deliberate and the upset part.
There may be a good moral argument to draw that line differently when the consequence is a ban from a commercial platform vs being locked in a prison.
No definition is as concrete, unmalleable, and unchanging as you seem to imagine.
Truthfully, the two observations are related: The word "trolling" being kind of vague is probably the main reason why people do not agree on what actually constitutes it.
what then, in your mind, is the clear, precise definition of spam?
I am kind of surprised at how many different ways people have interpreted what I said. I'm frankly feeling a little defeated.
How is that not trolling?
Arguing that something is trolling because it solicits a reaction, or that because it's disruptive it counts as trolling, doesn't make sense. You can't distinguish trolling without knowing someone's motivations. Posts that could be trolling could just as easily be venting, or bringing up a genuine concern that just happens to be contentious, or etc.
Otherwise, flaming people in general is obviously trolling. That's not the way the word trolling has been used historically.
I have thought about this before and I get kind of annoyed when people assume spam is a given exception to free speech. Free speech by itself doesn't imply a limit. So I think you have to carefully design around this issue or just let it happen and maybe it dies off on its own since too much spam chases everyone away and then what's the point of spamming?
edit: drowning out other people's speech may be allowed by total free speech but it's also contrary to the intention I think. I think free speech means that any speech someone wishes to make, they may do so without restriction or censorship by the medium. This isn't a completely satisfying definition though.
Most of the problems that seem to justify prior restraint come from poor source identification. If it's illegal, deal with it through the legal system, after the fact.
Facebook may have had the right idea with their "real names" policy, if they'd stuck to it and required strong authentication.
There is no reason to assume that this theory would fail to be legitimized in courts. Spam is not legal in other arenas, e.g. phone calls & texts.
The fact that the operators of spam bots are difficult to prosecute does not mean what they are doing has legal grounds.
Can you clarify what this means?
> Spam restrictions aren't generally applied by the government, and therefore don't fall under the constitution.
The "free speech" constitutional amendment stipulates that the government can't restrict speech. It doesn't apply to a mail service provider, which is free to reject whatever it likes.
> The law doesn't require anyone to listen to someone else's speech.
Your freedom to speak to me ends when I decide I don't want to listen to you. I have a right to not listen, and I have a right to reject spam.
> It is not a violation of anyone's rights to discard their emails unread using an automated filter.
I don't know how to say it more clearly: using a spam filter doesn't violate the US constitution. Email would be unusable without spam filters.
(Any given post of spam, basically just unsolicited peer-to-peer advertising, seems evidently an idea, IMO - it's very much something you are supposed to believe and act on. If it's a volume/repetition/thoughtlessness thing, how's that different from a wave of trolling posts?)
Or what about the threat examples, since you claim all those categories were subjective? You didn't engage with that one, or many of the others. "I know your kids go to [specific school], watch out"? Subjective? "I'm going to kill you and your family?" Subjective or straight up threat? How about mockery or insults? "Your post is stupid and you are stupid?" Is it subjective that that's insulting? So is it an "idea" or just a particular way of saying I disagree with you? "You are clearly politically motivated and not arguing in good faith?" Maybe that one is actually an idea!
Alright. Spam is when you post something to a space where it's not relevant (not really applicable to Twitter since it doesn't have topical spaces), or post something repeatedly without a good reason.
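The repetition half of that definition can actually be checked mechanically. As a hedged illustration (all names and the threshold are hypothetical, not any platform's real rule), a minimal sketch might flag authors who post near-identical content more than a few times:

```python
# Toy sketch of the "posting repeatedly without a good reason" criterion.
# Names and the limit are illustrative assumptions, not a real platform rule.
from collections import Counter

def flag_repetitive(posts, limit=3):
    """posts: list of (author, text) pairs.
    Returns the set of authors who posted the same (normalized)
    text more than `limit` times."""
    counts = Counter(
        (author, text.strip().lower()) for author, text in posts
    )
    return {author for (author, _), n in counts.items() if n > limit}
```

Of course, the other half of the definition (relevance to the space) is exactly the part that resists automation, which is why this remains a judgment call in practice.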
> "I know your kids go to [specific school], watch out"? Subjective? "I'm going to kill you and your family?" Subjective or straight up threat?
Those are specific threats, which are not protected as free speech.
> "Your post is stupid and you are stupid?"
Nothing wrong with saying that, aside from the fact that it's not constructive and you'd be better off explaining why you disagree.
> "You are clearly politically motivated and not arguing in good faith?"
Same as previous.
This is very clearly not just about "expressing ideas." There is very real behavior that is designed entirely to hurt other people and is 100% legal that is at the center of this discussion of online moderation.
It's not an idea, though I don't see a particular reason to ban it, unless you have a platform like Saidit that generally encourages constructive discussion over baseless insults (which I consider a great idea).
It's their platform and they want their users to be free in their discourse, they just don't want spam.
Identifying which moderators were witting participants in the chilling of free speech is harder than axing the many who underperformed and keeping the few who made a real, tangible contribution.
Yes, bad things are on the internet.
That is in no way the content or information I'm describing, nor does it fall into the category of legal free speech.
Bringing up illegal things to argue for the suppression of legal speech is, I don't know, moronic.
You are perpetuating a tired trope and I haven't the energy to persist with this discussion
This is where you get bent.
Your logical fallacy is detailed here:
I hope that, despite what you are implying, this success by Musk is not upsetting to you.
He bought Twitter to stop that from happening.
So it's safe to assume, in my opinion, that he fired a bunch of people who care more about their ideology than they care about fair or honest debate, or about acting in accordance with the principles of freedom or of the United States.
It's largely cost-cutting for sure, but I also think he believes he needs to start over in a lot of these departments, the entrenched beliefs of how things should be moderated was undoubtedly very deep. I suppose we'll see!
Spam in e-mail is a problem because spam consists of private messages sent to individuals, but public messages can be quickly categorized by a community, and one's own characterization of a message, for the purpose of sorting messages for display, can be based on whether it's chosen for retransmission by people whom one has high trust in.
Imagine having a feed which is initially a horrible spam-filled mess, and then, as you encounter things you like, you upvote them, until your feed is resorted to include mostly quite interesting things. When you again find spam or uninteresting content, you reduce the weight of the people who retransmit it.
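The mechanism described above can be sketched in a few lines. This is a hedged toy model, not any existing platform's algorithm; the class, method names, and weight multipliers are all illustrative assumptions:

```python
# Sketch of a trust-weighted feed: each post carries the list of people
# who retransmitted it, and a post's rank is the total trust you assign
# to those retransmitters. Upvotes raise that trust; spam flags lower it.
from collections import defaultdict

class TrustFeed:
    def __init__(self):
        # Everyone starts at a neutral trust weight of 1.0.
        self.trust = defaultdict(lambda: 1.0)

    def score(self, post):
        return sum(self.trust[u] for u in post["retransmitters"])

    def upvote(self, post):
        # Liking a post boosts the weight of everyone who passed it along.
        for u in post["retransmitters"]:
            self.trust[u] *= 1.5

    def mark_spam(self, post):
        # Flagging spam penalizes the same people.
        for u in post["retransmitters"]:
            self.trust[u] *= 0.5

    def sort(self, posts):
        return sorted(posts, key=self.score, reverse=True)
```

Under this sketch the feed self-corrects: a few spam flags are enough to sink everything a low-quality retransmitter surfaces, without any central moderator passing judgment on the content itself.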
Moderation always risks being political manipulation, and it's probably too dangerous to democracy to allow it. It's certainly too dangerous to accept foreign moderation (I'm from Europe).
July, 2021: https://archive.ph/VlpYI
> Defending and respecting the user’s voice is one of our core values at Twitter. This value is a two-part commitment to freedom of expression and privacy.
> This is a global commitment, and while grounded in the United States Bill of Rights and the European Convention on Human Rights, it is informed by a number of additional sources including the members of our Trust and Safety Council, relationships with advocates and activists around the globe, and by works such as United Nations Principles on Business and Human Rights.
NATALIE WYNN (the mind behind ContraPoints, a left-wing YouTube channel)
I do think that looking at 8chan is a pretty good case study in what happens when you say "okay, let's just let people say anything." People are posting child pornography to this website on a fairly frequent basis.
I think you're right. "On the Media" was just talking about how having an "anything goes" policy leads to a place where nobody wants to hang out, where people post illegal stuff even though it's technically not allowed. Which I think is valid when Musk has previously said Twitter should allow anything legal.
I guess that's the context for today's free speech discussion.
This isn't cause for concern, because they've already caught some longstanding CSAM. They are doing a better job now.
Time will tell on this, but currently Musk looks a lot _more_ credible on this than Twitter 1.0. They seemed to have been a lot more focused on repressing wrong-think than dealing with actual criminal behavior. And this isn't just abstract--they helped ruin countless lives.
Twitter previously tolerated this? That’s the first I hear of that. Do you have more info?
That's not a resolved matter by the way. He's still alive, still a convicted rapist, still a fugitive, and still defended and respected by Hollywood-centric media. He continues to evidently be immune to 'cancellation' because... influential people in movie industry like his movies I guess.
My understanding is they made an offensive photoshoot using children but I wasn't aware of any sexual abuse allegations. I'm aware of the court opinion on the blanket and the sexually suggestive stuffed animals. It's inflammatory, sure, but there's definitely a case that it's artistic as well.
I'm pretty sure this is just a more digestible, mainstream version of the Wayfair human trafficking conspiracy theory.
I'm pretty strong on free speech myself. But if you want to repress awful stuff, Musk is not the place to start. And the narrative that he's somehow lowering the level of discourse on Twitter is absurd.
But as a society, can art and images only be taken literally and autobiographically now? That feels like old Christian ways when Jesus could not be depicted (or at least the art history part of my brain thinks of that)
I'm honestly not seeing what's so grotesque about it all? Feels tame as far as fashion stuff goes. Wouldn't bat an eye in the '90s.
What exactly is the issue with his work? That it depicts nude (but not sexualized, as far as I could find) children? And the children aren't real models afaict either.
I like what I see of his work. Reminds me of Francis Bacon. Deeply human art. There's something about that kind of figure work that speaks to the soul.
So yeah, not sure why this artist is "the bottom."
That's actually sadism.
I dunno, maybe you just don't get it. Do you know what sadism is?
On the other hand, I've yet to hear a convincing argument as to why it shouldn't bother me other than "oh just get over it it's not that bad", which isn't working with me, since I have no petty bourgeois sensibilities, and I know what I'm doing when it comes to art.
Why should I be cool with sadism?
So it was a child carrying a doll that had BDSM gear. But that's funny cus if the girl was the one in BDSM gear, that would make it wrong? Is that the line?
I personally am not familiar with this photo and don't really want to see it, so yes, your comment could have misled me.
If you are going to participate in this discussion, you need to keep up. You're attributing malice where there is none. That means you have to see the picture.
I'm not saying you can't say anything, just that I'm going to dismiss you out of hand.
Have you seen Big Daddy? The Adam Sandler film? Is that also on this level? Feels like the same thing. Child actors in an adult piece of media with sexual and violent and otherwise adult themes.
I want to know if you think it's ok if the kid wears BDSM gear. Yes? No?
Given that we're already talking about a hot-button issue which is the subject of conspiracy theories, that seems dangerous.
I do know I hit a nerve with that particular scenario. I got all I wanted. Ok, so let's just move on. Let's get to the fun stuff.
So the line stopped at her wearing the BDSM gear. Why? Why not cross that line? Why is it ok for the bear to wear BDSM gear and for her not to?
Apparently, if it was the kid wearing the BDSM gear, it wouldn't feel like the same thing. It would feel wrong.
Of course, if you, or anyone reading this feels different, speak out.
Moderating dog whistles at-scale is an unsolved problem. Quick, Paul Graham, seize the opportunity to change the world!!
 Weird, not haha.
What's shocking is that apparently Twitter has had a massive CSAM problem for years and Twitter's crack moderation team apparently did very little about it, not even banning hashtags reported to them repeatedly. And none of this got attention until Musk bought Twitter.
[removed section about addition of new reporting option]
I say this as someone who is not in the slightest a Musk fan.
What I find more concerning about this is the way media attention is used offensively. Clearly CSAM has been a problem on Twitter for some time. But only post-Musk is CSAM on Twitter becoming a focus of attention in the media. Is it increasing or decreasing? We're likely to get only fearmongering articles about the T&S team layoffs.
Eliza Bleu aka Eliza Morthland (aka Eliza Siep aka Eliza Cuts aka Eliza Knows) is the daughter of MAGA politician Richard Morthland, and her "Bleu" trafficking-advocate persona is an act. She's a former American Idol contestant, ex-girlfriend of My Chemical Romance's Gerard Way, associate of child molester Jeffree Star, and fundraising partner of convicted child rapist and conservative spokeswoman Felecia Shareese Killings. Bleu also coordinates with Mike "Who cares about rape? I don't!" Cernovich, one of the Pizzagate amplifiers (and the man who got James Gunn fired), and is amplified by QAnon rags like The Epoch Times. Bleu began her podcasting career by interviewing and platforming Tara Reade, the former aide of Joe Biden who was caught fabricating claims of sexual abuse.
She is, to be blunt, an incredibly proficient grifter and propagandist who specializes in weaponizing the topic of sex crimes.
Eliza has been a speaker at various Tesla events for years, and works hand in hand with Teslarati, an Elon Musk propaganda news site. They are the original sources of the initial bogus claim that Musk had moved against CSAM on Twitter.
When Bleu claims Twitter had a massive problem with CSAM and that the former Twitter execs did nothing, she is actually telling the truth. The platform relies too heavily on pornography for user retention, and it didn't/doesn't have the manpower or tech to filter out (at scale) the massive amounts of underaged porn and CSAM that comes with being a major adult content platform. Instead of prioritizing child safety and nuking all the porn on the site like Tumblr was forced to do by Apple, the prior admin opted instead to bury the issue even as it continued to grow into a massive albeit invisible problem. They deserve every criticism and attack Eliza Bleu has lobbed at them, regardless of her own actions.
Now Musk finds himself in the exact same bind. Twitter (still) needs porn to hold the site together, yet it's (still) thoroughly infested by CSAM threat agents. It's a tricky situation, and the wrong move sees Apple giving Twitter the Tumblr treatment.
So Eliza Bleu has been tapped to control the narrative and prevent this, using her survivor persona as a platform to convince everyone that Twitter 2.0 has actually addressed that awful CSAM problem. She's the figurative Iraqi information minister, hands up before the mics.
Observe her reactions to the Forbes article that cites Carolina Christofoletti (an actual researcher, academic and CSAM expert who is respected in the infosec and OSINT communities). Christofoletti explains that the problem was never addressed at all, that CSAM hashtags are a ridiculous focus of attention, and that the situation has gotten worse. Bleu immediately goes into PR mode, attacking Forbes and Christofoletti in a profanity laden tweet accusing them of having an agenda.
Not the behavior of someone actually concerned about child porn, is it?
Any reporting or research that contradicts the narrative she is molding is immediately a danger to her task. Bleu has attacked every other piece of in-depth reporting on this issue since her initial tweet claiming that Elon had purged most of the CSAM from Twitter. She is on damage control; her original claims were misleading, and now she will progressively be on the defensive as more of us in OSINT and the media call her lies into question.
Ironically, Eliza's current role makes her one of the biggest protectors of CSAM users on Twitter. Nothing helps them more than Bleu desperately attempting to propagate the narrative that they've been booted off the network when the reality is that they're surging well beyond anything anyone can imagine.
Keep watching her. You'll see the cracks.
False. Do queries about this issue prior to Musk's takeover.
-John Adams, letter to Thomas Jefferson, 1817
If you read Free Speech: A History from Socrates to Social Media you'll find this "irony" repeated throughout history. It's a problem of human nature, and that's what ideals arduously seek to overcome.
Like, I think "corporations are people too, my friend," but corporate moderation as a form of protected speech or association takes Citizens United to the next level.
First Amendment lawyers seem mostly† to be dunking on the idea that there is anything controversial about the protection Twitter enjoys.
† maybe "entirely" is the right word here
The rationale behind regulating Twitter would be the same as the one behind campaign finance laws: to keep a big corporation from using its power to influence elections. Except the difference is that producing a political movie is clearly political speech, while the decision to delete particular items from a firehose of user-generated content doesn't seem to express any message on the part of Twitter itself. As far as I can tell, the Twitterati are not addressing the implications of the "must carry" line of cases like TBS v. FCC, which hold that forcing a corporation that provides a pipe for content to carry particular types of content isn't a First Amendment violation.
The Twitter internal emails confirm that folks inside Twitter weren’t treating suppression of the Hunter Biden news as a political statement on the part of Twitter. They were concerned about the potential impact on the election.
In any case, you can get online somewhere to say what you want. Use Cloudflare. The market surrounding the matter of publishing stuff online is healthy, and the nature of the internet/web is such that you can reach a global audience with relatively minor equipment.
On the consumer side, what is the argument? There is no lack of outlets, serving every niche, freely available on the same connection that delivers you Twitter. What's the problem? That people can't look away from Twitter?
It's hard to call out hypocrisy without first establishing some coherent principles. I'm not sure what those might be for any of the major political or legal factions. For example, how do we categorize and differentiate health care relationships, for when the government might want to dictate which pamphlets a provider must make available in their waiting rooms or which warnings must be placed on a label. It's roughly similar to the situation with social media companies in the sense of commercial entities mediating private relations, but I suspect a substantial number would find themselves on the opposite side of any hard line drawing.
To me this is basic 1st amendment stuff. We've gone pretty far down the road to authoritarianism when people think we need to protect the leader of the government's ability to force media companies to carry his messages.
The question is whether the government can require a platform to host otherwise legal content on politically non-discriminatory terms. Put differently, it’s about whether platform owners can use their market power over what’s essentially infrastructure to distort the country’s political debate.
Insofar as many folks on the left believe that corporations don’t have free speech rights period, it’s not “basic 1st amendment stuff.” And even for folks who think Citizens United was correctly decided, a company producing a political movie in its own name is much closer to core political speech than the moderation decisions of a user-generated content platform.
The central issue here is whether government officials can override the moderation policies of these companies.
The idea that twitter and other social media companies are just neutral platforms doesn't make any sense. Moderation and promotion of messages based on the content of the message has always been central to their business.
Also, the idea that you can just call a company "a platform" and then the government can just start directing the content of its media operations is rather alarming.
> Insofar as many folks on the left believe that corporations don’t have free speech rights period
To the extent the first amendment has anything to do with free speech anymore, they do. "The press" would have to be understood as an organization, not individual people. I don't know much about your far left people, though.
The issue isn’t neutrality, but rather expression. The First Amendment protects a person’s expression. It protects articles in the New York Times because those articles carry the imprimatur of the Times. When an article is published, the Times is the one speaking.
Twitter may or may not purport to be neutral, but it does not claim that the content on its platform is its own speech. Nor does it even claim that its moderation expresses any message from Twitter. Indeed, if they banned Trump because they hate Trump, and were willing to stand behind that position, that might well be protected speech.
But as you acknowledge, Twitter’s moderation and content promotion is just part of their business. They promote content they think users will like and moderate content they think users won’t. It’s like Google’s search results. Could the government prohibit Google from promoting its own products first and hiding the sites of its competitors in search results? Almost certainly.
> Also, the idea that you can just call a company "a platform" and then the government can just start directing the content of its media operations is rather alarming.
The difference between a “media outlet” and a “content platform” really isn’t a fine one, or at least Twitter isn’t anywhere close to the line. Is the company purporting to communicate a message? The New York Times is. CNN is. Twitter is not.
If that were true it would create a loophole so big that the first amendment would practically not exist. The President could require the New York Times to carry a daily editorial on the front page from the desk of the president, with just an additional caveat: "This editorial does not reflect the opinions of the NYT." The President could require TV stations to broadcast speeches from the president speaking in the Rose Garden.
> Could the government prohibit Google from promoting its own products first and hiding the sites of its competitors in search results? Almost certainly.
You're just pointing out that the free speech guarantees of the first amendment aren't absolute, which isn't in dispute as far as I know. It doesn't follow though, that therefore the first amendment allows the government officials to dictate any content they want to be published by media companies.
> Is the company purporting to communicate a message?
Well, when twitter adds a fact-check to a tweet, they are obviously communicating a message.
If the government is, in effect, simply hijacking a private communications medium to carry a message that's clearly from a different speaker, that's permissible. In TBS v. FCC, the Supreme Court held that cable companies can be forced to carry local channels: https://supreme.justia.com/cases/federal/us/520/180/#tab-opi.... It found the burden on "cable providers editorial discretion in creating programming packages" to be insufficient to raise a first amendment problem.
Under the reasoning of TBS, Twitter has even less of a basis for claiming first amendment protection. Cable companies exercise "editorial discretion" in curating channels to fit into a limited number of available slots. Twitter doesn't need to do that and doesn't purport to do that.
> You're just pointing out that the free speech guarantees of the first amendment aren't absolute, which isn't in dispute as far as I know.
Not just that, but the guarantees are greater or less depending on the type of speech. The protection is very low for commercial speech that isn't purporting to carry any sort of message.
This is so true. The problem we have now is that online platforms are preventing us from making those choices for ourselves. We're being told we're not allowed to talk to certain types of people, often for ideological reasons.
Online platforms shouldn't limit our choices, they should empower us to find whatever content we want and block/remove content we aren't interested in.
Twitter has been doing exactly what you pointed out for the last several years, shaping public discourse and directing political narratives
Most prominently, his public justification for keeping Alex Jones off Twitter is that it's personal for him. Which is just fine with me mind you, but doesn't constitute any kind of "free speech enthusiasm".
I really don't know if that's something you or others might know about or consider deleterious to the operation of a free nation. You might be a proponent of those messages and opposed to the messages that previously had been systematically removed from the platform.
The problem is that allowing these things to occur will ultimately be bad for you as well.
I don't give a fuck about Alex Jones
§ 230 does not require neutrality, and such a requirement would defeat its purpose.
Do you admit this is a problem, but you just don't see a solution compatible with the 1st amendment's freedom of association? Do you support the Civil Rights Act's restriction of freedom based on race and/or think it is constitutionally valid? Why would the same logic not hold for restricting companies such as Twitter then?
 Spare me rebuttals of "if they really wanted to communicate, they could do so by carrier pigeon!" - 99% of users won't go through with such effort, or even know who they are being herded away from. The remaining motivated 1% is too small to have any political power, and so the censor wins.
Sure, just as common carrier regulation is constitutionally valid. The Constitution isn't a suicide pact; the Founding Fathers very clearly did not intend it to be one. We've accepted a non-literal wording of the First and other amendments since the beginning.
> Why would the same logic not hold for restricting companies such as Twitter then?
Because being kicked off Twitter is hardly the same as not being able to dial 911 or purchase critical services. We've passed laws to correct specific, significant harms that are nothing like being unable to tweet. We weighed First Amendment rights against the rights of those being harmed in these situations and had to decide which conflicting rights mattered more.
That same process happens here. Different situation, different consequences, different decision.
> Spare me rebuttals of "if they really wanted to communicate, they could do so by carrier pigeon!" - 99% of users won't go through with such effort, or even know who they are being herded away from.
No need for a rebuttal. That's fine.
The Civil Rights Act also prevents Twitter (or any company) from banning users based on protected characteristics (e.g. race and sex). It is not remotely limited to critical services or common carriers.
Examples: power company, phone company, ISP.
People are starting to view Twitter et al as similar to phone companies — and hence think they should be bound by common carrier rules.
These are actions that would be explicitly forbidden for a common carrier to perform; as such, we can pretty conclusively state Congress never intended to apply common carrier status to places like Twitter.
I didn’t say the current law was that way. Twitter didn’t exist (nor did social networks) when the law you’re citing was written, so it’s quite possible Congress didn’t contemplate this case or express any intent at all.
That would be a good reason to change the law:
If the way it is written doesn’t match how it should apply because the case in question wasn’t contemplated at the time.
Twitter doesn't have monopoly power like phone companies or railroads tend to, nor are the consequences as impactful. There's zero impact on your ability to publish your personal free speech from being kicked off Twitter.
Although the definition of a common carrier isn't necessarily that it is essential anyway. It's that it is offered explicitly as being available to the general public, not involving any sort of individual contracts between carrier and users. Everybody gets exactly the same deal. In that sense, Twitter arguably qualifies, but being a common carrier doesn't mean you can't ban people. Taxis and airlines are common carriers and can absolutely ban you if you don't follow their rules.
The presidents and major politicians of many nations communicate via Twitter.
Major businesses, eg Google and its subsidiary YouTube, only respond to customer service issues, eg improper account bans, via Twitter.
There are some businesses, eg YouTube content creators, who you can only contact through Twitter; and some businesses, eg news agencies, which primarily source their content from Twitter.
- - - - -
> Everybody gets exactly the same deal. In that sense, Twitter arguably qualifies, but being a common carrier doesn't mean you can't ban people. Taxis and airlines are common carriers and can absolutely ban you if you don't follow their rules.
The difference comes when Twitter bans the NY Post from receiving the same treatment because they posted a true story that was politically inconvenient for Twitter leadership during an election.
There is more to it than a common carrier can “ban you for not following their rules”. Common carriers have limits on their rules:
A taxi service can’t refuse to take you to a political rally because they don’t like your politics; nor can a phone company disconnect you because they don’t like what you’re saying.
Rules are enforced by imperfect detectors of infractions because there is no such thing as a perfect detector. No matter how careful a review process is, failures will always happen. Innocent people are convicted by real courts through decades of appeals in spite of all the due process. You will always be able to find some instances where a member of party A does exactly the same thing as a member of party B, but only one of them gets punished. Does this mean we should try the best we can and accept that we will fail a non-zero number of times, or should we never have rules instead because that's the only possible way to be fair?
I'm going to be perfectly honest here. I don't follow Twitter at all. I've never had an account. I haven't read an article from the NY Post that I can remember in probably 20 years. I don't actually care about this stuff and think it's silly internet drama that should not be dominating national headlines. So it's entirely possible Twitter really was biased here. I don't know. It's not as though organizations are never in the wrong. When black people were banned from every lunch counter in 15 states, it wasn't an innocent mistake by organizations trying their best to exercise their right to freedom of association. When communists were purged from Hollywood, they were definitely targeted because of their party, not because they'd explicitly done anything.
But without stooping to the specifics of whether one single incident was right or wrong, I would like to agree that it is okay for a service like Twitter to have terms of service and remove content and block accounts that breach those terms. Legally, at least right now, they are not treated as a common carrier, but if that ever changes, one of those terms of service can't be "no Republicans," but that also doesn't mean no Republicans can ever be banned. If you think you can prove disparate impact in a court of law, have at it, but just be aware that you're in the ample company of every political partisan ever who believed themselves to be some unjustly persecuted minority being suppressed for just telling the truth. The fact that some small number really were doesn't do much to impact the overall probability distribution of holding that belief.
What do you mean by some number of government constraints?
In my opinion, the fact that speech online has this dual nature is why there is so much debate about it. Here's a mock interaction...
Individual: "You shadowbanned me. Why?"
Platform: "We don't want to publish you 1 billion times on the internet"
Individual: "Then don't be an editor. Don't give people special treatment."
Platform: "We tried that at first; it did not go well. I can make my platform however I want to."
Individual: "But there are only a handful of people making choices that affect billions."
Platform: "The vast majority of these people are not banned or censored"
Etc etc, the debate never ends, because the two sides have opposing financial interests and political opinions/values.
How does this argument fare when it comes to email? Should I expect gmail to inspect my messages and not deliver them if they don't like what I'm talking about?
Most of the controversial decisions by platforms have been about people with millions of followers. I also exaggerate when saying "billions"; it can be thousands.
I think email is different because it is a much more passive technology. It doesn't have a news feed, just things like spam filters, etc
Worth noting that Twitter just announced that users will see more tweets from people they don’t follow: https://techcrunch.com/2022/11/30/twitter-recommended-tweets...
We can accept that social media platforms can engage in censorship. But net neutrality needs to be restored so ISPs can’t disrupt legal traffic.
(Note: The article in the image is fake, hence the word "hypothetical" in the tweet.)
If I don't want to associate with someone I can unfollow them, unsubscribe from their subreddit, block them etc.
What matters is that it's my choice to do that. We shouldn't have that choice made for us by the people running online platforms.
The bailey: getting sites removed from the Internet.
So me and my buddy Tim are going to come to your house and talk about how much you suck.
Autoexec: But this is MY private property.
So yes. So are online platforms. They can kick you out at any time they want and you can pound sand. If you don't like it, set up your own server online to freely associate. You don't get to demand their resources. You don't get access to their network.
My house isn't a global social media platform. It's a private residence. Different rules. It's not that small private online groups shouldn't exist either, but we absolutely should expect major social media platforms to uphold free speech ideals and allow people to speak without excessive censorship. We should seek out and support platforms that do this and shun those that do not.
I have no need for another gatekeeper deciding for me what I'm allowed to see and hear based on their personal ideology. A platform like that isn't serving me. If someone wants to create platform as a service for others, that platform should empower the users it serves to find and engage with the content they want and block/ignore what they aren't interested in.
I think morally, social media platforms should work to stomp out nation-state disinformation campaigns, tell professional trolls to fuck off, and can spam and illegal content. But as opposed to you, I'm not convinced they should be legally required to do anything but the last item on that list.
> If someone wants to create platform as a service for others, that platform should empower the users it serves to find and engage with the content they want and block/ignore what they aren't interested in.
If someone wants to create a platform as a service for others, they can do whatever the hell they want.
Not a conservative, but I support free speech and I'd definitely like such an instance. What point are you trying to make?
Basically, I am saying conservatives have conflicting views on the rights of businesses to conduct themselves when they try to say Twitter must let them say nazis are cool and vaccines don't work.
If you think about this for a bit you'll realize how impossible this actually is. The professional chaos monkeys are going to play both sides so you always lose.
You again want access to someone else's network and software you have absolutely zero right to. You use these services under surveillance capitalism without paying directly yourself and make demands of them. Their platform owes you less than nothing and we see this codified in the TOS's that state exactly that.
Disconnect from them completely and take their power away. Run your own services and control what is said on your own network. Demanding that someone else do it is insanity.
Well, for starters, there are laws where I live against billions of people showing up in my home, even though billions of people show up on social media platforms. The rules for residential housing and social media platforms are different, just as their intended uses are different.
> Social media does not make sense globally.
And yet here we are... on a global social media platform. It somehow works.
> You cannot appease both the US government, conglomerate of EU governments, and the Chinese government without being in conflict of what one of them wants.
Social media platforms shouldn't concern themselves with what every government on earth wants. An online service should (generally speaking anyway, this is obviously an oversimplification) only worry about what the law says in the country it's located in, and it should fall on users to make sure what they post doesn't violate local laws. Some governments won't like that and may block your platform, but that's a problem for that country's citizens to sort out with their repressive government.
If every person and service on the internet had to concern itself with the laws of every nation on Earth the internet itself would be impossible.
> You again want access to someone else's network and software you have absolutely zero right to. You use these services under surveillance capitalism without paying directly yourself and make demands of them. Their platform owes you less than nothing and we see this codified in the TOS's that state exactly that.
Within certain legal limitations, a platform has every right to be as restrictive as it wants. It can allow only connections from certain IP ranges. It could only allow people to sign up if they pay them. It can refuse to allow anyone to post anything but the number "8" if it wants to.
What a platform has a legal right to do, and what it ought to do, are very, very different things. I'm not arguing that a platform MUST under the law allow anyone to post anything they want. I'm stating that a major social media platform SHOULD allow people to discuss the topics that interest them freely and openly. It SHOULD avoid censoring people for ideological reasons.
We, as users of social media, should make demands of the platforms we use. They exist to serve us. We provide the entirety of their content, which those platforms get and publish without ever paying us directly. It is OUR content that drives the traffic and engagement they see. We should reward platforms that follow free speech ideals and enable us to discuss what we like and we should shun platforms that censor us and limit what we're able to see and do.
> Disconnect from them completely and take their power away. Run your own services and control what is said on your own network.
That's the problem with a global internet, isn't it? Let's say I do decide to create my own social media platform. I still need to depend on others to host my domain and my servers. I depend on payment processors to pay for those things. I depend on every other ISP to carry traffic to/from my IP. I depend on protocols and software written by other people. You cannot have a social media platform on the internet without depending on others. At every step in the chain countless people have the power to censor things they don't like. It's better for everyone when they don't. Censorship should be heavily discouraged everywhere, but especially online.
How exactly is this different from you deciding that all the popular kids at school must be friends with you, that churches have to welcome you into their pulpits to preach, and that Random House has to publish the book you wrote?
Neither the world, nor other people, exist to meet your needs. Nor to implement your professed value system.
If we're running with the school/friends analogy the problem we have now is that schools are preventing us from becoming friends with all the popular kids at school. Schools shouldn't control who we are able to be friends with at all. Schools should enable kids to gather together and form friendships with the kids they like, and also allow kids to avoid harassment from kids they aren't interested in without kicking those kids out of school because other kids might want to be their friends.
Social media platforms shouldn't censor what topics we're able to discuss or decide who we can talk to. They should provide a place for users to gather and discuss what they want, while providing a means to unfriend/hide/unsubscribe from things users aren't interested in (while still allowing others to see those things if they want to).
> that churches have to welcome you into their pulpits to preach
If a church puts up a giant welcome sign inviting members to come up to the pulpit to preach, and I find that idea valuable, they surely have a right to turn me away for arbitrary reasons, but that probably makes them an asshole and I'd look for another church. A book publisher doesn't have to accept my manuscript, but a world where the only books that can get published are ones that support a certain ideology would be dangerous and undesirable.
Social media platforms can exist to serve the needs of the people, or they can exist to be self-serving. The more platforms that exist to serve the people by providing a space for them to discuss what they like without forcing them to see content they aren't interested in the better off we all are.
We should support social media platforms that exist to meet our needs and we should reject social media platforms that fail to. We, as a people, are best served by social media platforms that respect the ideals of free speech. Online platforms would rather dictate what we're allowed to see and hear, but while that sort of self-serving behavior is common it is also increasingly harmful as the influence of a platform grows and as it becomes increasingly difficult for less repressive alternatives to exist.
Why do you think these are the only two possibilities?
Why can’t a social media company exist to serve the needs of only some people? Or to serve the needs of everyone, while also serving its own needs, in a balance decided by the company’s owners?
What if all of the layer 3 networks got together and decided to not allow you in particular to demand their resources?
As for the actual server resources, you have not established where you have any legal right to them.
How exactly does your server get online without utilizing the private property of some entity?
With that said we typically treat transport (phone lines, network connectivity) different than applications, and absolve the carrier of the traffic from the content of the traffic itself. With non-transport hosting it gets more complicated quickly.
But until the time we adjust our laws (and good luck with that) it's going to be difficult to nullify property rights and freedom of association of the property owners over the users of the services.
But it is an important point not to have just one site like Twitter or Reddit or even HN, so we can go to one of the competitors and complain about the former without angering the latter.
I specifically want the political speech of the people most ideologically opposed to my point of view to be able to make their speech, free from government intrusion on it. I will almost surely not like it, but I will defend vigorously their right to say it.
Neat. But I'm under no obligation to carry that shit in my blog's comment section, social media platform, or any other service. If you want to stand up some service you're under no obligation to carry my speech that you don't agree with either.
Isn't that every country? I'm not American, but to my understanding speech calling for direct violence is illegal in the US, where Twitter is based.
The question we should be asking is HOW content is being moderated. Shadow moderation, when a forum tricks authors into thinking their removed or demoted content is publicly visible, is an abridgement of free speech culture we should be addressing. I recently gave a talk on this which led to some discussion on HN. The wider public is generally unaware of the degree to which this happens to all of us.
I'm pretty sure Twitter already shadow moderates content. My reply here only shows up when directly linked, not under the parent tweet, and it wasn't hidden by FIRE.
This is openly admitted when platforms say "Free speech but not free reach" as in the case with Musk and Twitter, or when they talk about raising or reducing content as in the case of YouTube.
I wonder if the left will be able to take advantage of Texas' social media law now that leftists accounts are being banned.
The people advancing one argument pre-Musk might be different to the people advancing the argument the opposite way, even on the same political side.
But I agree that on aggregate, the same logic is now being applied to form opposite conclusions.
Once you realize that these abuses are inherent to a system that doesn’t culturally accept free speech, then you see the danger of the current moment more clearly.
False. Or, rather, it's false if you want Twitter to maintain its liability shield in Section 230 of the Communications Decency Act, specifically 230(c)(2). Without this, Twitter becomes liable for any content. This is of course US-centric. Different countries have other requirements.
> Musk may not be the best — or most consistent — messenger for free speech. And you may not agree with his interpretation of free speech.
We all know what Musk means when he says "free speech". It's the same as when any conservative says "free speech". It means "hate speech". It means not wanting to get banned for spouting transphobia (in particular), homophobia, racist screeds, misogyny, etc.
> If we care about an America whose support for free expression goes beyond the law, we must support a culture of free expression.
No, we shouldn't. Every time some variant of free speech absolutism has been tried, the results are always the same: it fills up with Nazis. Everyone else leaves. Even 4chan has a ToS (basically "no CP"). That's the place for unhinged hate speech and conspiracy theories.
Platforms don't want to be known as being a Nazi hotbed. Advertisers flee. Beyond that however platforms should consider what's best for the total user base. Allowing a few extremists to spew hate speech in the name of some ideal of free speech culture at the expense of everyone else is narcissism personified.
I'll close by noting the paradox of tolerance.
Society is changing, and people want to discuss it. To be heard.
Yes, that makes change harder and slower.
But you absolutely do not get to choose the "right" solution and call everyone that is concerned and unconvinced "transphobic"
Pretty weird if something can be discussed in legislature, but not on twitter
I don't know that I agree with him. I'd need epidemiological data. But that is not the point. The point is that we, as a society, need to discuss.
It is transphobia to deliberately and knowingly lie that children are getting gender confirmation surgery. Because they aren't.
> To argue about puberty blockers?
Puberty blockers are non-invasive and reversible and given as medical care for harm reduction. Why do you think what medical care someone gets is anyone else's business? More importantly though, such discourse is never in good faith.
> Pretty weird if something can be discussed in legislature, but not on twitter
It's still misinformation and hate speech when it's "discussed" in bad faith by reactionary state legislatures.
> The point is that we, as a society, need to discuss.
First ask yourself why anyone else gets a say in another person's medical care. It's really no one else's business.
Children are being subject to surgery, and puberty blockers are not “non-invasive and reversible”.
This is trivially verifiable and well-documented, including, most recently, in the New York Times; that you’d label objective, obvious truths as “transphobia” demonstrates exactly why we need robust, open discourse.
If this web site allowed flame wars, would we all be better informed on this topic?
There is not one single mention of children being subject to gender-affirming surgery in that article. At all.
You have been continuously repeating this misinformation and have failed to produce one single shred of evidence to suggest it is anything but.
What we can affirm is that people in Slovenia can have sex reassignment surgery at 15, in Scotland at 16, and in most EU countries only after 18.
And then we can ask if those policies are generating more happiness than suffering, or more suffering than happiness.
The NYT article mentions a 16-year-old getting a mastectomy, hormones, and then regretting it. I would consider a 16-year-old a child, but you might not, and that is fine. The useful question is still: was it too soon? Maybe not! Maybe she is just unlucky, and mastectomies for 14-year-olds will make the world a better place.
But it is a discussion, and people will not accept that they don't get to ask questions and have opinions.
Section 230(c)(2) further provides "Good Samaritan" protection from civil liability for operators of interactive computer services in the good faith removal or moderation of third-party material they deem "obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected."
I.e.: you can remove some content without being considered an editor responsible for all content
It's also nice to see them attempting to separate "free speech" into two buckets of meaning, the legal and the cultural. It's a point that gets muddled.
However, the article, like a lot of "free speech culture" defenses I see, fails to explain why "free speech culture" has to, essentially, be "one way". The freedom of someone to say something, but without the freedom for someone to speak against it. If the consequence of saying something is a lot of people mocking you for it, that can be, and often is, just as chilling as any specific action.
It also fails to discuss, at all, how actions and speech are somehow distinct things. If you're saying something I disagree with, am I morally obligated, within "free speech culture" to sit there and hear you say it? Am I morally obligated, within "free speech culture", to support businesses which publicly say things I disagree with, or things which specifically target me, or my family? If not, isn't then the monetary consequence of "free speech" potentially chilling?
And if people should/must be free to speak against speech they disagree with, and if people should/must be free to deny business to businesses they disagree with, then isn't the "free speech culture" defense just a disagreement over whether someone, or some group, is right in the speech they use, and actions they take? The argument isn't about broader principles.
Elon Musk took over Twitter and disagreed with speech, and actions, the previous owners took. He reversed course. He's "free" to do so. He also took issue with speech and actions the previous owners didn't take, and banned accounts whose speech he disagreed with. He's free to do so.
Individuals are free to speak against that. Individuals are free to take their business elsewhere because of that.
That is, as far as I can tell, what a "free speech culture" should/must mean.
Finally, I took note that their most compelling "you have to be for 'free speech culture'" cautionary tale in the article was explicitly not about private individuals, or companies, but a government's (CCP) ability to pressure private companies – something which is explicitly rebutted by "free speech law" as bounded by the author.
I see the problem as:
* disproportionate justice such as dogpiling
* trying to get people fired, by applying pressure on employers
* trying to de-platform, e.g. removing accounts and websites
If it was just replies, there wouldn't be a problem.
So far what I've been hearing is that the ad industry considered Twitter to be a bad place to advertise to start with.
Then Musk came in. The first thing he did was shout that there are too many bots on the platform, which I'm sure is just the thing one wants to hear when advertising.
Then he fired a lot of people, which seems to mean that Twitter is now hard to advertise on anyway, because internal systems don't perform well anymore and the people advertisers used to talk to got laid off. And Musk is heaping extra controversies on top.
Musk is simply incompetent at running this particular business.
Yes, firing deadweight might help. Doing it immediately, before figuring out who's deadweight and who isn't, was the stupid part.
See here, for one discussion: https://www.nytimes.com/roomfordebate/2015/04/16/what-are-co...
Not that it makes him wrong, but citing yourself is not a great look, IMO.
I appreciate the link.
As to the specific matter here, Twitter is now a privately held corporation. It has no such fiduciary duty. The lenders can presumably call their loans if they think Musk will bankrupt the company. Musk can legitimately believe that his vision regarding free speech will maximize the company's value, and he could be right or wrong, or he could be making it all up as he goes and not be sincere about anything, and he gets to. I'm not a mind reader, so I won't hazard a guess as to what he thinks about freedom of speech and profitability.
Free speech only makes it harder for people to tell lies with impunity since people are then free to discuss your lies and call them out for what they are.
"Sunlight is the best disinfectant". By all means, get those lies out in the open so we can publicly tear them to shreds, even better when online platforms are themselves fact checking posts.
There is also reason to believe that governments and corporations may prefer that people aren't very capable in this area.
I think it's far better to give people the tools to work out what is true/false and to debate difficult topics (even those without a clear answer) in a transparent manner.
I'd agree that both governments and corporations would be thrilled at the idea of a population that has no choice but to accept whatever they are told, but I think that as a people we're better served when we have the option to develop the skills to think critically and online platforms can play an important role in that.
I propose we try something similar to what we did when there was an undesirably low level of literacy in society: mass education. But in this case, the subject would be philosophy.
> Do we design communication systems with the assumption that all people are incapable of telling fact from fiction just to make it easier for those who can't?
I would say yes, but I would drop the "just to make it easier for those who can't" and replace it with something like "because an undesirably low level of people in society have substantial skill in logic, epistemology, rhetoric, etc, and it is at least plausible that this state of affairs could have negative consequences, including with regard to 'existential' problems like climate change or the preservation of 'democracy'".
A big problem is that people tend to have pretty strong beliefs about their capabilities in any given domain, and the source of this confidence is very often substantially based on intuitive self-assessment, the output of which is a function of the very skills in question.
> I think it's far better to give people the tools to work out what is true/false and to debate difficult topics (even those without a clear answer) in a transparent manner.
100% agree. Though, we already have tools that could support that activity (HN is one such example), but they currently have no means of insisting that people do it (unlike in a classroom where unruly/etc students who are downgrading the learning of others can be and are asked to leave, in an adequately skillful way (sufficient to accomplish the goal)).
> I'd agree that both governments and corporations would be thrilled at the idea of a population that has no choice but to accept whatever they are told, but I think that as a people we're better served when we have the option to develop the skills to think critically and online platforms can play an important role in that.
Oh, humanity certainly has this option; it is not prevented by the laws of physics, anyway. But having an option available does not guarantee that it will physically manifest - someone has to actually make it happen. Ironically, in the past I've run some of these ideas by moderators here and they... didn't have a lot of (even abstract) interest in the idea... which to me is a sign of... something.
With luck, perhaps some day some non-trivial/adequate number of humans on the planet will rise to a level of ability where they can competently and accurately discuss the degree to which our "democracy" is actually democratic - a highly contentious and rather important topic that is absolutely butchered in every conversation I've encountered.
Free speech is another excellent example of a topic where most people (including genuinely smart people) simply lack the training required to discuss competently:
It really can be difficult to keep up with all the lies being told, but automated fact checking can be a great help here, and the more a lie is seen, the greater the odds it will be challenged (provided we have the freedom to challenge it).
We've never found anything that works better. For any given idea discussing and examining the positions for and against it is still the best way to get to the truth. The more transparent that process is, the better.
Conspiracy theorists don't care about facts or truth. They will act as they do regardless, but free speech ideals make it very difficult for echo chambers to exist because all ideas can be publicly challenged.
Sunlight being a disinfectant is not true. Stochastic terrorism driven by media hyper-engagement is definitely true.
Although it's never had a good one, for any reasonable definition of "alt-right people" this isn't true at all. It's the same racist far-right folk as always, just now online (like we all are).
> now regular assholes are being infected by this stuff
Not really. The idea of racist ideology as an infection is dangerous and simply wrong. You could spend all day every day listening to racist propaganda and hate speech and you'll never suddenly wake up thinking some people are better than others because of race. If everyone who listened to hate speech became mind-controlled into being "radicalized" researchers and anti-racist activists who do follow that stuff would have a massive problem, but it doesn't happen. There are people who are vulnerable to falling in with that sort of crowd, but even then it's not the message that hooks them.
> a huge percentage of the information storm coming at me every day is either angry, an outright lie, a reposting of something hateful or untrue and "sunlight" isnt helping anything.
The internet has enabled easy global communication, and some of what we get is malicious and manufactured to deceive, but mostly it's a distorted reflection of what people are feeling in real life. Social media platforms encourage the worst in us and reward exaggerated anger and extreme, inflexible positions. That's largely to do with how those platforms are designed and the algorithms that drive engagement. We don't need massive amounts of censorship to solve that problem.
Sunlight is helping!
I've seen people (even on this site) post information only to be corrected by others. I've even seen some of those people acknowledge that they were wrong and start to question their sources!
Having hate groups in massive online communities has made it easier than ever to keep an eye on them. It's enabled us to see what lies were being spread and get fact checks published to increase exposure to the truth before many people were exposed to the lies (research has shown it's far more effective to inoculate people with education before exposure to misinformation than it is to get them to change their minds after they've been misled).
Because many of the traitors planning to attack the Capitol on Jan 6th were on well-known social media platforms like Facebook and Parler, police and researchers were able to use those posts to identify and prosecute people they wouldn't have been able to otherwise.
It all comes down to this: You can't fight against something you aren't allowed to see. As long as people are able to communicate (online or offline) they're going to spread misinformation. We shouldn't just sweep it under the rug so that we can pretend the problem is solved. We have to confront it directly and openly even when it's uncomfortable.
There are many different metrics that could be considered, so maybe that would be the best place to start if we were interested in having an actually serious discussion on the topic.
Free speech as codified in Bills of Rights is there to stop the Government from censoring people, usually opponents of the current Government.
The idea that you can say whatever you want is ridiculous. Many laws stand in the way of this: assault, libel/slander, perjury, etc.
You definitely can't. Even if you mean "you can speak with whom you want, who also wants to speak with you"... well, freedom of speech kind of comes down to what you'd be talking about. Dictatorships don't prevent people from speaking because of who the people are, but because of what they think those people will say or have said.
No we don't. This essay is trying so hard to say that while de-platforming may not be government censorship, it's just as bad. But it's not. In China right now, people are protesting in the street with blank signs because they aren't allowed to say anything about anything in public, and they are still getting arrested. There's a sort of slippery slope argument given in the article that we shouldn't be headed in that direction. But Twitter's content policies, for example, are in line with the culture and laws of Western nations as a whole. My understanding is some countries have hate speech laws that the US doesn't have. There is no "slippery slope to China," just the US wanting to be a rough-and-tumble outlier. It's the same kind of slippery slope argument that says having "normal" healthcare like other Western countries would make us communist or something.
Social norms are changing; that's not fascism. That hyperbolic metaphor has gotten out of control. We have a republic "if we can keep it," as the article says; losing our democracy to an authoritarian regime permanently (or for a long term) would entail a level of real suffering that is not comparable to (and has nothing to do with) it becoming permanently socially unacceptable to use a racist slur, say, in public discourse. There are no brilliant "ideas" embedded in sheer bigotry that we are missing out on, and giving less airtime to hate speech or misinformation is nothing more or less than that.
At the very least, the article claims to elucidate a "distinction" but actually blurs several things together, such as censorship; "canceling" (which can mean a lot of things but is sometimes just a simple result of public backlash leading to a TV show being canceled, and you can't force people to like some celebrity who committed sexual assault or is racist etc etc and see them the same way as before, or treat having a talk show on TV with advertisers as some kind of fundamental human right); and what ideas are considered worth discussing at academic institutions (which have always had their idiosyncratic preferences about what ideas merit discussion and research, I'm sure).
Maybe they are, but what I've often observed is that a vocal minority of believers are shouting down any and all non-believers.
You're looking at a small mob and thinking that they're representative of all of society.
It always surprises me that people never question the writing of constitutions, as if they were god-given, monolithic texts.