What I'm Reading (Jan. 21)

I admit I wandered down the rabbit hole on deplatforming and free speech with three articles, each with a different viewpoint. And then a piece on vaccinations, which is less about central control than about centralized communication.

“My first and ongoing feeling when I hear that Donald Trump has been banned from Twitter and Facebook and other social media is relief. I have the same feeling when I see Apple and Google and Amazon putting the screws to Parler, which was created to be a home for conspiracy theorists, fantasists, liars, and sowers of hatred.

But then I think of this comment from a recent essay by Cory Doctorow: ‘The one entity Facebook will never, ever protect you from is Facebook.’”

While my knee-jerk reaction was to be happy that President Trump had been banned from Twitter, I did begin to wonder at the value of “putting the screws” to Parler and others – why drive them underground when sunlight is the better disinfectant? We should be careful what we wish for, especially when it comes to the companies channeling our new interconnectedness. From the Hedgehog Review, Our Manorial Elite

I found another piece, this one from the Brookings Institution, that takes a slightly different view.

“When algorithms make decisions about which incoming content to select and to whom it is sent, the machines are making a protected editorial decision. Unlike the editorial decisions of traditional media whose editorial decisions are publicly announced in print or on screen and uniformly seen by everyone, the platforms’ determinations are secret: neither publicly announced nor uniformly available.”

I would argue that while both protected and decision are the right word choices, editorial is not. Social media companies are not optimizing content; they are optimizing attention, because attention lets them serve up more advertising that may distract us into purchases. Perhaps a better solution is to make the algorithms transparent, or perhaps we should be able to opt in to having the algorithms provide us with choices. I say opt in because there is great power in default choices: if the default is being opted out, we are protected unless we explicitly ask for the algorithm’s advice. From the Brookings Institution, The consequences of social media’s giant experiment

And another along the same lines:

“Rather than seeing the polarization and monopoly problems in isolation, it helps to recognize that the cause of both is a policy framework that has shaped the internet into a series of dominant platforms who radicalize their users. Sure, conservative infrastructure mattered; rioters used Parler as an organizing forum. But more important to their movement were mainstream platforms, like Twitter, Facebook, and YouTube. These services, unlike Parler, made money by selling ads as the riot occurred. Facebook even placed assault weapon ads next to the groups organizing an overthrow of democracy.

The problem gets more interesting when one considers the motivation behind the riots. When the rioters attacked the Capitol, they did so not to destroy democracy but in their minds, to save it. Steeped in a years-old ecosystem of disinformation and rage, most of them sincerely believed they were stopping an election from being stolen…”

From Matt Stoller, Take the Profit Out of Political Violence

Finally, one last article that we can file under “What we have here is a failure to communicate.” As I have written, a mass vaccination program does not require central control so much as central communication. ProPublica discusses how vaccines are being distributed to the states in How Operation Warp Speed Created Vaccination Chaos. I would argue, by the way, that this is not so much a political problem as a bureaucratic one.