Thousands of third-party contract workers have been poring over private conversations recorded by Alexa-enabled devices (for the purpose of improving Alexa’s speech recognition systems). Is this ethical? How can AI software be trained while respecting privacy?

Posed by a journalist.

To comment on this dilemma, leave a response. For anonymity, omit your email address and website, and use a screen name.

About John Hooker

T. Jerome Holleran Professor of Business Ethics and Social Responsibility, Tepper School of Business, Carnegie Mellon University

One response

  1. John Hooker says:

    It is not ethical. For one thing, it is deceptive. Deception is almost always ungeneralizable and therefore unethical. Eavesdropping on Alexa users is deceptive because the whole point of Alexa is that it is based on AI. This leads users to believe that when they talk to Alexa, they are talking to a machine, an AI-based system that starts interpreting their speech when it hears a “wake word.” When humans listen in, users are deceived. They would be careful about what they say if they knew that strangers were listening in.

    Perhaps Amazon could avoid deception by alerting users that their Alexa commands may be monitored for research purposes. Of course, the warning would have to be clear and up front, not buried in some lengthy “privacy” notice that no one has time to read. Yet even this risks the violation of autonomy that goes along with constant monitoring and surveillance. Alexa-enabled devices are always “listening,” even if they do not respond unless they hear a wake word. Potentially, they could record comments spoken without a wake word, and this apparently happened in some of the AI training sessions. Constant surveillance interferes with human autonomy. This is hardly a new idea, as it goes back two centuries to Jeremy Bentham’s Panopticon, an institution in which inmates could be monitored at any time without knowing when they were being watched. The net effect is constant surveillance, much as in today’s online world. Bentham saw it as exercising “power of mind over mind,” and he was right. If we know that strangers could be reading our emails or listening in at any time, but we don’t know when, it is as though they are monitoring us constantly. We cannot be ourselves, which is exactly what denial of autonomy is.

    Amazon can rather easily train its Alexa system without violating ethical norms. I acknowledge that “deep learning” in a multilayer neural network does not happen automatically. Human trainers must carefully shape the learning process and tweak the network architecture to achieve the kind of results we hear so much about. So Amazon must involve human trainers to make Alexa work. This can be done simply by asking a subset of users to give Amazon permission to record their Alexa interactions for human analysis, perhaps in exchange for free merchandise. There is no deception, and users know whether they are being monitored and can take reasonable precautions. I wish ethical conduct were always so easy.
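    To make the opt-in idea concrete, here is a minimal sketch of how such a consent gate might work. The names, data fields, and logic are purely illustrative assumptions for this discussion, not Amazon’s actual system or API: a recording is released for human annotation only when the user has explicitly opted in and the device was actually addressed with a wake word.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class UserConsent:
        """Consent record for one user (illustrative schema, not Amazon's)."""
        user_id: str
        allows_human_review: bool = False  # explicit opt-in, off by default
        compensated: bool = False          # e.g., free merchandise for opting in

    def queue_for_annotation(consent: UserConsent, audio_clip: bytes,
                             triggered_by_wake_word: bool) -> Optional[bytes]:
        """Release a clip to the human-annotation queue only when it is
        ethically usable: the user opted in, and the device was actually
        addressed (no accidental recordings made without a wake word)."""
        if not triggered_by_wake_word:
            return None  # never keep audio captured without a wake word
        if not consent.allows_human_review:
            return None  # no opt-in, no human listener
        return audio_clip  # eligible for transcription and model training

    # Example: the opted-in user's command may be reviewed; the other is dropped.
    alice = UserConsent("alice", allows_human_review=True, compensated=True)
    bob = UserConsent("bob")
    assert queue_for_annotation(alice, b"turn on the lights", True) is not None
    assert queue_for_annotation(bob, b"turn on the lights", True) is None

    Under this kind of scheme the human review that training requires still happens, but only on interactions whose owners knowingly agreed to it, which is what removes the deception.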
