Living in the age of the Internet has many pretty cool upsides, such as having basically the entire accumulated knowledge of the world accessible via the handheld glass rectangles everyone stares at all day. But what to do when some of that information is demonstrably false, or worse, deliberately misleading? Fake news is hardly an Internet innovation but the incredible speed at which it can disseminate is, and now the debate rages about whether it’s proper for platforms to allow propaganda to propagate on their property.
With the 2020 elections coming up fast (did the election cycle ever really end?), the big online content platforms are feeling the burn to get ahead of the next round of accusations that they are facilitating electoral manipulation. On the one hand, this week Facebook’s Mark Zuckerberg acknowledged calls to police “fake news” and political advertising, but declined the idea of banning political ads altogether. On the other, Twitter’s CEO Jack Dorsey came out just days later and promised to entirely ban paid political advertising on Twitter.
Mr. Dorsey’s sentiment certainly resonates, that “political message reach should be earned, not bought.” But whether he realizes it yet or not, Twitter’s new stance will inevitably open up another whole can of worms, because what, precisely, is a “political” message? Who defines what separates political speech from other issues that groups might run ads about?
In contrast, Mr. Zuckerberg and his sometimes-nemesis, Sen. Ted Cruz, are both seemingly in agreement about defending free speech online. You might guess it was Mr. Cruz who said that: “… political ads are an important part of voice — especially for local candidates, up-and-coming challengers and advocacy groups that may not get much media attention otherwise. Banning political ads favors incumbents and whoever the media covers.”
But in fact Mr. Zuckerberg said it just last month. Now, some may assume that his incentive is financial, as Facebook’s platform is far more conducive to effective political advertising and it’s a profitable activity for them to allow. But he’s also right on the principle — without the ability to boost their message in a crowded market, how is a start-up candidate or issue group supposed to get noticed?
At the same time, it’s easy to see the appeal (beyond just taking a shot at Facebook) for Mr. Dorsey to just throw his hands in the air and eliminate political advertising on his platform. Too many in the public and the media seem to have assumed that their fellow social media consumers are mindless pawns, helplessly controlled and manipulated by the dark, sinister forces of false online ads and memes. Disinformation, they say, must be policed aggressively for the sake of our democracy.
But responding to this demand puts social media platforms in a content-moderation paradox. Many of their customers are demanding that platforms try to weed out disinformation, however that’s defined. And yet, many of those same people get angry when that moderation seems biased in any way.
Subjective decisions must be made at every level about what constitutes “fake news” and every one of these decisions has the potential to introduce real or perceived bias to what content users are able to see in their feed. Is a given ad factually misleading, or is it reported as false merely because someone disagrees with its premise? Is it slanderous, or an inconvenient truth? Even a truly unbiased moderator (and I doubt such a person exists) would have difficulty answering those questions consistently in every circumstance. Algorithms, certainly, lack the nuance and context to fill that void.
As Mr. Zuckerberg appears to finally be learning, attempting to act as the arbiter of truth in political speech online is not only a thankless task but probably a futile one. Whether Facebook and Twitter police their political content or not, politicians and users alike get angry and want to regulate them.
This is a dangerous trend, because forcing platforms to be “neutral” or placing legal guidelines on how they should police political speech would effectively let the government decide what truth is online. Nothing would be more dangerous to freedom of expression than that.
Jack Dorsey’s decision to involve Twitter in deciding which ads are political and which are not will continue to generate that kind of blowback, and will mute the voices of many people and causes in the process. Ironically, this plays right into Facebook’s hands, as it can trumpet the fact that it allows a far greater degree of freedom of expression by letting users decide whether political ads are worth listening to.
In any case, hopefully last week’s speech by Mr. Zuckerberg is the vision that wins out — one that trusts us to be our own best moderators rather than expecting either Silicon Valley or Washington, D.C., to do it all for us.