A conflation often occurs in free speech discussions between legal free speech protection and the value of free speech in general. This distinction is central to recent discussions of the rights and obligations of social media platforms like Twitter, Reddit, YouTube, and Facebook to police the speech of those participating on the platform.
Let's start by noting that the First Amendment (1A) does not compel these platforms to publish anything. These are private companies. The 1A protects a citizen's right to make statements and hold opinions; it does not compel any other entity to use its resources to support such statements. If anything, the 1A protects these companies from being compelled to host such statements on their sites.
Nonetheless, insofar as these platforms have become the de facto public square, people should be concerned about the extent to which they police speech. The arguments for the importance and utility of the 1A in a thriving democracy apply, to some extent, to these major internet platforms. If a free and open exchange of ideas is important in the public square, there is some value in ensuring that such an exchange can occur in what has become the central place where ideas are exchanged. Furthermore, there is a legitimate concern about insisting, or even allowing, that agents of a major company become the actual arbiters of what one can legitimately say in the public square. This is not because the 1A compels them to allow any and all speech, but because we think the open exchange of ideas is useful and/or because we do not want those policing central speech platforms to be driven by obligations to shareholders rather than by general democratic obligations.
These issues become conflated. So it is worth underscoring that appeals to free speech on social media platforms are not refuted by the simple observation that the 1A does not apply to them. One may want to minimize the policing of speech on social media platforms without believing or assuming that the 1A compels us to do so.
The simplest solution is to dig in our heels and argue that whatever is protected by the 1A should be allowed on these major platforms. But this does not work very well in practice, for one simple reason: internet anonymity. Anonymity results in people making statements not necessarily because they believe them to be true and stand behind them, but for a whole range of other reasons: personal vendettas, the thrill of generating a reaction, boredom, or whatever. The reasons for supporting free speech don't clearly apply to anonymous speech. We support free speech, presumably, because we see value in keeping the parameters of discussion open so that we don't shut out useful and important ideas. But preserving anonymity does little to encourage the open exchange of ideas; it degrades discussion into personal attacks and outrageous, unsupportable claims, most of which contribute almost nothing to keeping the parameters of discussion open and are more likely antithetical to realizing this.
However, even setting anonymity aside, I do not think these private companies should be compelled to allow all speech protected by the First Amendment. It is fairly clear that blatant hate speech and utterly false claims are not useful contributions to the free and open exchange of ideas, even if making them fully illegal is impractical or otherwise undesirable. I suspect there are some things we can do to protect robust speech on these platforms without simply entrusting to Facebook the difficult task of dictating what people can legitimately say. A few guiding principles:
- Make the guidelines for legitimate speech on any platform clear, and ensure they are consistently applied. If hate speech is prohibited, all hate speech is prohibited; articulate clear examples of what counts as hate speech, including a full range of edge cases. Publicize these, and make it clear to people what kinds of speech they will not be hearing or articulating on the platform.
- Have independent arbiters decide what does and does not count as legitimate speech. These should be truly independent arbiters, paid by some independent organization. How does this get funded? I don't know, possibly via taxes, but every attempt should be made to keep it maximally independent from both the companies and government officials.
- Allow for tighter or looser controls based on the degree of anonymity under which a user is operating and on that user's history of operating in good faith. A person putting their name and reputation on the line legitimately gets a lot more latitude in making offensive and controversial statements than does @Trump4Ever4550 with 3 followers and a one-day-old account.