Benefit or Bane?
As technological breakthroughs accelerate, engineers will need to address the ethics and potential risks of their creations.
By Vivek Wadhwa
Rarely do scientists seek limits on the use of technologies they themselves invented. Yet Jennifer Doudna told a journalist that her technology needed to be put on hold pending a broader societal discussion of its scientific and ethical issues. That technology was CRISPR-Cas9, a bacterial system for engineering genomes that her team at the University of California, Berkeley, had developed. In an essay in the December issue of Nature, Doudna said she would lie awake at night wondering whether she should stay out of the ethical storm brewing around it.
Many scientists and engineers will soon face a dilemma of this type. A broad range of technologies are advancing exponentially—and converging. Amazing things are becoming plausible, such as the ability to cure genetic diseases and edit plant genes for greater nutrition. These advances are making it possible to solve the grand challenges of humanity. But they also are creating new nightmares. The CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) technology alone could change the genetic makeup of the human race.
Every exponentially advancing technology entails such ethical issues. By 2023, the computational ability of a $1,000 processor will surpass that of the human brain. Its abilities will continue doubling every 18 months. Artificial intelligence will soon pass the Turing test for having human-like abilities. Delivery and surveillance drones will clog our skies. Robots and self-driving cars will take human jobs. And unregulated nanomaterials, too small for the body’s filtration systems to exclude, will create unprecedented health risks. All of these technologies will be used for good and for ill.
There is not even a consensus on what is good and bad, or on what is ethical or moral. We want the benefits that technologies provide—but we don’t want to deal with the risks, which, usually, we don’t even understand.
Engineers are typically experts in technology, not in the sociological issues that technology can create. Policymakers are even more clueless: they understand neither technology nor its implications. Nor can we expect them to, because policy and law are meant to be developed through hindsight. Laws are essentially “codified ethics”—guidelines accepted by the members of a society on the basis of a social consensus—and they are often decades, even centuries, in the making.
Consider the question of privacy, an issue that touches all the data gathered by websites and smartphones and by cameras on streets, in shopping malls, and on drones. Our privacy laws date back to the late 19th century, when newspapers first started publishing personal information. Boston lawyer Samuel Warren objected to their publishing social gossip about his family, leading his law partner, future U.S. Supreme Court Justice Louis Brandeis, to pen the Harvard Law Review article “The Right to Privacy.” By establishing the idea that there exists a right to be let alone, just as there is a right to private property, the article laid the foundation of today’s privacy laws.
Our copyright laws date back to the 1700s, some 300 years after the creation of the printing press. Intellectual-property laws gained traction in the days of the steam engine.
The gaps between technology, law, and ethics are widening exponentially. Few people understand the possibilities that the new technologies create. That is why engineers need to do what Jennifer Doudna did: explain the good and the bad of their technologies and engage in the ethical debates. They also must help develop policies and ensure that their creations are used to benefit and uplift mankind.
Vivek Wadhwa is a scholar of entrepreneurship and a researcher at Duke University’s Pratt School of Engineering. He is also affiliated with Stanford and Singularity universities.