Am I My Machine’s Keeper?
Devices that learn from and influence users pose new ethical dilemmas.
By Aditya Johri
The prospect of autonomous vehicles presents a variation on an old philosophy question: “You are in your driverless car and a child stumbles onto the road. What action should your car take? Should it run into a wall, possibly killing you, or run over the child?” The new ethical twist, of course, is this: Should the choice be programmed into the car, engineered in at the factory, or should a button be provided so the driver can choose the setting? This is just one of innumerable choices confronting engineers as software with artificial intelligence makes its way into ever more engineered objects.
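To make the two design options concrete, here is a minimal sketch. It is purely hypothetical: the enum, the function names, and the scenario encoding are invented for illustration, not drawn from any real vehicle software.

```python
from enum import Enum
from typing import Optional

class EthicsMode(Enum):
    PROTECT_OCCUPANT = "protect_occupant"      # brake straight; risk falls on the child
    PROTECT_PEDESTRIAN = "protect_pedestrian"  # swerve; risk falls on the driver

# Option 1: the choice is engineered in, fixed at design time.
FACTORY_DEFAULT = EthicsMode.PROTECT_PEDESTRIAN

def choose_maneuver(mode: EthicsMode) -> str:
    """Return the maneuver for the 'child on the road' scenario."""
    if mode is EthicsMode.PROTECT_PEDESTRIAN:
        return "swerve_into_wall"
    return "brake_straight"

# Option 2: the choice is exposed as a setting -- the 'button'.
def choose_maneuver_with_setting(user_mode: Optional[EthicsMode]) -> str:
    # If the driver never made a choice, fall back to the factory
    # default, which quietly returns the decision to the engineer.
    return choose_maneuver(user_mode or FACTORY_DEFAULT)

print(choose_maneuver_with_setting(None))                         # factory default decides
print(choose_maneuver_with_setting(EthicsMode.PROTECT_OCCUPANT))  # driver's button decides
```

Either way, someone decides; the code only determines whether it is the engineer at design time or the driver at the press of a button.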
Another instance of software raising ethical questions is behavioral programming: using technology to change people’s behavior. We see this in devices such as the Fitbit™, designed to improve the human condition by giving users information about their physical activity, which in turn motivates them to exercise more and take better care of their health. In the household, the Nest™ learning thermostat helps users cut their energy consumption and live more sustainably.
Other programmable technology is not so innocuous. A major example is the programming of casino slot machines to encourage high-risk, addicted gamblers to keep playing. By making it seem as if players have almost won a jackpot, or by offering them easier access to loans, these machines target human vulnerability. A less drastic but still questionable example is Facebook’s use of algorithms to manipulate your news feed so that you spend even more time on the site.
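The near-miss manipulation described above fits in a few lines of code. The sketch below is illustrative only, with invented symbols and an invented bias probability; it shows how a machine’s displayed outcome can be skewed toward “almost won” without changing the actual odds of paying out.

```python
import random

SYMBOLS = ["cherry", "bar", "seven"]

def spin(near_miss_bias: float = 0.6) -> list:
    """Simulate one three-reel spin with a manufactured near-miss effect.

    A fair machine would simply display the independently drawn reels.
    Here, a losing spin is rewritten with probability `near_miss_bias`
    so that the first two reels match: a 'near win' that never pays out
    but feels tantalizingly close to the player.
    """
    reels = [random.choice(SYMBOLS) for _ in range(3)]
    if reels[0] == reels[1] == reels[2]:
        return reels  # genuine win, displayed as-is
    if random.random() < near_miss_bias:
        sym = random.choice(SYMBOLS)
        other = random.choice([s for s in SYMBOLS if s != sym])
        reels = [sym, sym, other]  # two matches, then a miss
    return reels

print(spin())
```

The payout odds are untouched; only the player’s perception of how close they came is engineered.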
Dartmouth College philosophy professor James H. Moor anticipated this phenomenon in 1985 in his seminal paper, “What Is Computer Ethics?” He wrote, “The essence of the Computer Revolution is found in the nature of a computer itself. What is revolutionary about computers is logical malleability. Computers are logically malleable in that they can be shaped and molded to do any activity that can be characterized in terms of inputs, outputs, and connecting logical operations. . . . The logic of computers can be massaged and shaped in endless ways through changes in hardware and software.” What we as engineers do with this logical malleability is of course up to us, but are we prepared for it? More critically, are we preparing our students for this new era?
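Moor’s point can be shown in miniature. In the hypothetical sketch below (the thermostat framing and all names are invented for illustration), the inputs and outputs stay fixed while the “connecting logical operations” are swapped out, changing whose interests the device serves.

```python
def comfort_logic(temp: float, preference: float) -> bool:
    """Heat whenever the room is colder than the user wants."""
    return temp < preference

def cost_cutting_logic(temp: float, preference: float) -> bool:
    """Heat only when the room is well below the user's preference."""
    return temp < preference - 3.0

def controller(temp: float, preference: float, logic) -> str:
    # Same sensors in, same switch out; only the connecting logic differs.
    return "heat on" if logic(temp, preference) else "heat off"

print(controller(18.0, 21.0, comfort_logic))       # heat on
print(controller(18.0, 21.0, cost_cutting_logic))  # heat off
```

Nothing in the hardware reveals which logic is running; that opacity is part of what makes the ethical stakes new.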
Ethics education in engineering has looked primarily at the larger social context within which engineers work and, to some extent, at the ethics of designing objects that might cause harm, whether intentionally or through unintended consequences. For the most part, engineered objects have lacked the programmable agency that software code now makes possible. Today, machines can learn from their users, change their own functionality, and in turn change how users respond. This raises important questions about the engineer’s role. Are engineers, or should they be, gatekeepers of a machine’s actions? Now that those actions are programmable, is deciding what they should be the engineer’s job? Should designers be required to test and use their inventions before unleashing them on the public? Should users be involved more in design? Where are the boundaries? And what do we tell new engineers their responsibility is? How do we train them to better understand the impact of their design decisions?
In “Do Artifacts Have Politics?” Rensselaer Polytechnic Institute political theorist Langdon Winner cites the example of the Long Island parkways designed by master builder Robert Moses. The overpasses were built so low that buses could not pass under them. As a consequence, poor people dependent on public transit, many of them black, were kept off Jones Beach, an acclaimed public park that Moses also designed. Imagine a similar design decision today that prevents a certain population from using a device to improve their health, or one that prods them to spend money they do not have. Now imagine that the machine decides it can make those choices for its users because it is “intelligent” enough to do so.
We are entering a new world of ethics, one where ethics can be programmed into engineered objects. Should it be?
Aditya Johri is an associate professor of electrical engineering at George Mason University.