Twitter recently posted warning labels on some high-profile tweets, on the grounds that they incite violence or spread harmful misinformation.  Is this ethical?  Is it obligatory?

To comment on this dilemma, leave a response. For anonymity, omit your email address and website, and use a screen name.

About John Hooker

T. Jerome Holleran Emeritus Professor of Business Ethics and Social Responsibility, Carnegie Mellon University

2 responses

  1. John Hooker says:

    Let’s clarify the issue first. It’s not whether Twitter should be required to post warning labels, or prevented from doing so. It’s not whether Twitter should delete certain tweets. The issue is whether Twitter should voluntarily label tweets it sees as harmful.

    I will start with the autonomy principle. It says (in part) that if a tweet is certain to cause debilitating harm to at least one person, then it shouldn’t go out—unless anyone harmed has given informed consent to the risk. For our purposes, harm is “certain” when it is irrational to believe it won’t occur.

    A tweet that recommends drinking bleach to protect against Covid-19 violates autonomy, if a few people are certain to try it. We can’t say they give informed consent to the risk, because anyone who drinks bleach is obviously unaware of the danger. A warning label might absolve Twitter of an autonomy violation, because it could tell readers that bleach is poisonous and allow them to make an informed choice. Deleting the tweet is, of course, the surest way to avoid a violation.

    A tweet that will surely incite violence also runs afoul of the autonomy principle. The tweet should normally be deleted because a warning is likely to have little effect.

    When Twitter managers are uncertain whether a tweet will cause debilitating harm, they may still rationally believe that it will probably cause some kind of harm. Sending out the tweet violates the utilitarian principle. An example is the Presidential tweet claiming that mail-in ballots inevitably result in massive fraud. A warning label may satisfy Twitter’s ethical obligation, because it can refute the claim with evidence.

    An obvious problem is that Twitter can’t screen 350,000 tweets per minute for harmful misinformation. But ethics only requires Twitter to do its best. It can screen tweets from high-profile individuals. It can develop AI methods for flagging tweets that are likely to be objectionable. It can formulate a well-reasoned and transparent policy, based on ethical principles, that its staff can follow when vetting flagged tweets.

    The dominant social media sites have failed to meet these obligations. They want to profit from harvested personal data without taking responsibility for their worldwide influence. Facebook, for example, has become a medium for vicious rumors, destructive conspiracy theories, and oppressive government surveillance. When it comes to social media, we are still in the Robber Baron age.

    Facebook CEO Mark Zuckerberg argues that social media should not presume to be arbiters of truth. He has a point, and the generalization principle underlies his argument. Censorship in service of truth is unethical if its universal practice would prevent the free exchange of ideas and end up suppressing truth. Yet this argument doesn’t apply to warning labels. They don’t suppress the exchange of ideas. If anything, they promote it.

    Even responsible censorship would be generalizable if there were a variety of online outlets, as there was once a variety of newspapers. Opinions rejected at one site could be submitted to another, more friendly venue. Some may remember how newspapers once vetted letters to the editor for responsible views. Unfortunately, the online world is a winner-take-all market, dominated by Google (owner of YouTube), Twitter, and Facebook (owner of Instagram and WhatsApp).* They can single-handedly shape the exchange of information, and in our polarized society, anything they do will bring screams of political bias.

    In this acid environment, warning labels on questionable fact claims are one way that Twitter and other sites can meet their ethical obligations. They can also delete posts that could incite violence or will inevitably cause harm. These actions must be backed by a carefully formulated, transparent, and ethically grounded policy.

    As for the long run, let’s hope that a wide variety of responsible information outlets and social media platforms evolve alongside the dominant ones, which have been slow to live up to their responsibilities.

    * Even TikTok (Douyin in China) is owned by the Chinese Internet giant ByteDance.


    • Michael Bleier says:

      My wife and I, who worked in DC for the US Government, found the recent events there very upsetting and dangerous. The incendiary comments by DFT and Rudy Giuliani are very much akin to “yelling fire in a movie theater,” which we know is wrong, dangerous, and incites violence.

