Machines are sometimes designed to deceive us. The button at a crosswalk may have no effect on the signal but only induce pedestrians to pay attention to it. The door closing button in an elevator may be a dummy that gives people a sense of control. The progress bar for computer downloads may only give the impression that the download is progressing. Historically, a phone system that reached the wrong number may have patched the call through anyway to make callers think they made the mistake. The close button at the corner of a pop-up ad may only generate another ad. Netflix may switch from personalized recommendations to a standard list of popular movies, without notice, when the system is overloaded. The posted wait time at an amusement park queue may be a deliberate overestimate to reduce customer impatience.
Some argue that deception by machines is ethical as long as it is beneficial or at least benign. It has been compared to a magician whose deception is tolerated because it amuses us, or to a doctor who tones down a diagnosis to avoid upsetting the patient. Are these comparisons legitimate?
Based on this article.
To comment on this dilemma, leave a response. For anonymity, omit your email address and website, and use a screen name.
First, it is of course the designer of the machine (or the manager who decides to use the machine) who deceives, not the machine itself. Deception may be unethical even if it is beneficial, for example if it is not generalizable or if it violates autonomy. Each instance of “machine” deception must be evaluated individually on this basis.
Deception is causing someone to believe something you know is false. Deception usually fails the generalization test if the purpose behind the deception presupposes that people are actually deceived. For example, the fake crosswalk button fails this test, because the purpose of drawing attention to the signal would be defeated if everyone knew the buttons were fake, and they would soon know this if all buttons were in fact fake. The same goes for the elevator button, the progress bar, and the close button on the pop-up. Everyone I have asked about the progress bar already believes it is humbug. So these little tricks are unethical, whether they are beneficial or not.
Magic tricks are generalizable, however, because their purpose is to entertain. They continue to entertain even when everyone knows the magician is tricking us. The queuing time estimate (and even the progress bar) could conceivably be like this. People may be less impatient if the queue is faster than indicated, even if they know the posted time is an overestimate. It is a little like flattery. Flattery makes us feel good, up to a point, even when we know it is flattery. So a certain amount of flattery is ethical, if its purpose is simply to make people feel good. The doctor’s benevolent deception may sometimes fall into this category, but one must be very careful about this.
Sometimes deception seems generalizable only because the reasons for the act are drawn too narrowly. For example, the phone system deception may well continue to deceive even if all phone systems employ it (and perhaps they actually did). But one must ask why the phone company wants to deceive customers. If the reason is simply that it is convenient for the company, then this is not generalizable, because if companies always deceived customers whenever it was convenient to do so, customers would assume the company is pulling a fast one whenever something goes wrong. People in some countries already assume this about their governments, due to past deception. The phone company could give a more specific reason that is generalizable; for example, that it deceives in order to induce customers to believe that the switching system is reliable. But in this case, the decision maker’s rationale must include an explanation of why deception is appropriate in this case and not in other cases where it is convenient for the company, and the company must avoid deceiving in those other cases. If no such explanation was part of the decision-making process, then the deception is not generalizable. A similar analysis may apply to the Netflix case.