We live in an age of transformative scientific powers, capable of changing the very nature of the human species and radically remaking the planet itself.
Advances in information technologies and artificial intelligence are combining with advances in the biological sciences—including genetics, reproductive technologies, neuroscience, and synthetic biology—as well as advances in the physical sciences to create breathtaking synergies, now recognized as the Fourth Industrial Revolution.
These new powers hold great promise for curing and preventing disease, improving agricultural output and enhancing quality of life in many ways; however, no technology is neutral—and the powers of the Fourth Industrial Revolution certainly are not.
Since these technologies will ultimately decide so much of our future, it is deeply irresponsible not to consider together whether and how to deploy them. Thankfully, there is growing global recognition of the need for governance. Professor Klaus Schwab, Executive Chairman of the World Economic Forum, for example, has called for “agile governance,” achieved through public-private collaborations among business, government, science, academia and nongovernmental civic organizations. Wendell Wallach and Gary Marchant, both scholars in this area, have proposed “governance coordinating committees,” or GCCs, that would be created for each major technology sector and serve as honest brokers.
Whatever forms governance takes, and it will (and should) take many forms, we need to make sure that governing bodies and public discussion address four critical questions. The answers to these questions will require both scientific input and a willingness to discuss the ethical and social implications of the choices we face.
1. Should the technology be developed in the first place?
This question is now being asked, for example, with regard to a possible ban on lethal autonomous weapons or militarized robots. To date, there is no record of a lethal autonomous weapon selecting and destroying its own target without humans involved in the decision; however, many experts see this prospect materializing in the near future unless a worldwide ban is instituted soon.
Another example is geoengineering: the use of technology to alter planetary conditions, often to change the climate in order to slow the earth’s warming. This is a truly global issue that needs a collective approach, since one nation-state may make climate changes that are beneficial for itself but detrimental to others. Furthermore, some of the strategies – for example, proposals to seed the stratosphere with nanoparticles – carry unknown but potentially large risks for the planet as a whole. Science may or may not be able to quantify the risk, but even if we have risk estimates, discerning how much risk we should take, if any, is not something science alone can answer. Ultimately it is a moral assessment we need to make collectively.
2. If a technology is going to proceed, to what ends should it be deployed?
During the Fourth Industrial Revolution, there will be a wide variety of so-called human enhancements on offer. Some will focus on eliminating diseases; others may alter human capacities, promoting traits such as greater athletic ability or better memory, or reducing traits such as aggression.
Rather than endorsing or prohibiting enhancements in general, we should consider each type on a case-by-case basis, in terms of how likely it is to advance, or diminish, human flourishing.
3. If the technology is to go forward, how should it proceed?
It matters how a technology is researched and how it enters the world. For example, the National Academies of Sciences, Engineering, and Medicine in the United States recently issued a landmark report that takes a precautionary approach to the use of gene drives. Gene drives are technologies that, in combination with CRISPR-Cas9 gene editing, can exponentially increase the prevalence of specific genetic elements in a whole population of certain kinds of wild plants or animals. Right now, for example, gene drives are being considered as a way of controlling, or even eradicating, mosquitoes that are disease vectors for human illnesses such as malaria and Zika. The National Academies’ report encourages the development of gene drive technology, but calls for carefully paced research, first in laboratory settings and small field studies, before engineered organisms are released into the wild.
4. Once norms have been set, how will the field be monitored to ensure adherence?
Right now, there are guidelines for many aspects of research and technology diffusion, but serious gaps in our ability to monitor adherence or hold bad actors accountable. For example, there are sound regulations for the management of some kinds of toxic chemicals, but extremely inadequate funds for regulatory staff to monitor and inspect chemical sites. Governance mechanisms for the 21st century will have to grapple with which areas need mandatory regulation and how such regulations can be enforced.
“Facts alone are insufficient”
The answers to these questions need to be informed by facts, but facts alone are insufficient. All four questions require a willingness to discuss the values we hold dear, even when values discussions may lead to controversy and conflict.
Safety is perhaps the least controversial value. Most of us around the globe believe that there is an obligation to reduce the likelihood that individuals will be harmed by new technologies. Indeed, the primary responsibility of most existing regulatory bodies is to promote safety.
But there are other very important values at stake, and they are often given short shrift. First, we should commit to equity – to doing all that is possible to ensure that all people, regardless of their economic means, will have access to technology’s benefits. Otherwise, we run the risk of exacerbating what Hastings Center scholar Erik Parens has called “the already obscene gap between the haves and have nots.”
Even harder to talk about are values that have to do with ways of being in the world, with how we humans relate to one another and to the natural environment.
For example, some people worry that human genetic engineering could transform parent-child bonds, encouraging “hyper-agency” on the part of parents who would focus more on designing babies to suit their needs than on nurturing children to become who they will be.
Values like stewardship and respect for the intrinsic worth of wilderness areas are often invisible in our discussions or falsely framed as in opposition to economic development. And underlying so many of these issues is the fundamental ethical question about how much we humans should intervene in changing the nature of our species, other species, and the environment. Is there a level of human intervention that crosses a boundary into hubris, or that erodes cherished virtues like living in harmony with nature, rather than in dominion over it?
In short, the Fourth Industrial Revolution has brought us enormous powers. Now we must use them wisely. Governance, which will take many forms, must involve the public as well as experts. And, whatever forms it takes, we should anticipate at least four critical questions that need to be answered, no matter the technology sector. In answering those questions, we will need deliberate, thoughtful conversations about values that are often hard to reconcile. This path will engender strong differences of opinion, but that is exactly why we must embrace the dialogue – and soon.
Editor's Note: This piece originally appeared in the World Economic Forum Agenda's blog.
Mildred Z. Solomon is President of The Hastings Center. She is also a professor at Harvard Medical School, where she directs the school’s Fellowship in Bioethics, a program that builds the bioethics capacity of the Harvard teaching hospitals.