ASU Experts Weigh The Risks Of Innovation

By Nicholas Gerbis
Published: Thursday, June 8, 2017 - 7:05am
Updated: Monday, February 5, 2018 - 6:30pm

Uber tested its self-driving cars in Arizona in early 2017. (Photo courtesy of Uber)

Despite at least one death, a number of unresolved technical issues and a 2016 AAA survey in which three out of four U.S. drivers said they were afraid to ride in them, self-driving cars keep motoring on.

Health officials battle Zika and dengue fever via tens of millions of genetically modified “Terminator” mosquitoes — male Aedes aegypti whose offspring die before reaching maturity. Genetically modified foods, dubbed “Frankenfoods” by opponents, continue to spread.

Each inspires its share of fear in the public, safety assurances notwithstanding.

In 1970, Alvin Toffler coined the term "future shock" to describe the anxiety associated with overly rapid change. Driving that change is an outmoded view, still current in some circles, that Andrew Maynard, director of Arizona State University's Risk Innovation Lab, summed up well:

“We need to innovate as much and as fast as possible and, if there are problems, somebody will find a solution somewhere.”

Maynard added that, while that approach worked in the past, the modern world cannot tolerate it.

“We’re now living in such a tightly coupled world — and, by tightly coupled, I mean everything we do is connected to something that somebody else does — that we can no longer afford to be so laissez faire about the risks we take," Maynard said.

Our generation does not hold a patent on unease about the quickening pace of technical innovation. But neither did our forebears face quite so many technologies with so much capacity for wide-scale disruption. Consider bioscience alone, in which synthetic biology lets us design and build new biological entities; CRISPR-Cas9 lets us target and edit specific genes; and gene drives provide a genetic pipeline that shunts changes to progeny.

Clearly, as much as we embrace novel technologies, we also sense the need to keep a closer eye on their progress. But, even assuming we’re aware of half of these advances, or can grasp a portion of what they do, our daily worries often max out our mental bandwidth such that we have little time for issues like risk — except when disaster strikes.

In an influential 1972 paper, American economist Anthony Downs contended that certain impactful, but intractable, public matters undergo an “issue-attention cycle,” in which public focus fades after the initial crisis passes.

Daniel Sarewitz, director of the Washington, D.C., office of ASU’s Consortium for Science, Policy and Outcomes, said such a panic-and-forget mentality cannot suitably address risk.

“We have to get away from that attention cycle problem, where it’s crisis driven, and instead build into the innovation process itself this ability to think through what we’re doing as we’re doing it,” Sarewitz said.

ASU’s David Guston says emerging technologies like artificial intelligence and neurotechnologies are “high-consequence but high-uncertainty.” (Graphic via graphicstock.com)

That goes double for emerging technologies like artificial intelligence, neurotechnologies and robotics, said David Guston, director of ASU’s School for the Future of Innovation in Society.

“They’re high-consequence but high-uncertainty,” Guston said.

But risk entails more than odds and outcomes; it involves values and, beyond life and limb, those can vary — a lot. Consider qualities like dignity or autonomy, for example.

“They’re quite intangible, but these are the things that really motivate people — or really crush them when they’re taken away,” Maynard said.

Consequently, understanding innovation and risk means looking beyond good and bad, Guston said.

“What those simple dichotomies end up missing is that change is going to happen and, the question is, to whom do the benefits flow, and who are exposed to the risks?”

Sarewitz agreed, adding that the status of certain people restricts their capacity for responding to disasters, whether natural or human in origin.

“When there are stresses on society, the people who have less economic, social, intellectual, institutional wherewithal are going to get screwed more than others," Sarewitz said.

Maynard expanded upon risk’s subjectivity, saying we should think more in terms of prospects gained and lost — and of the values placed on them.

“It comes back to what’s really important to us — what’s really valuable to us — and what are we prepared to do to either gain that value or protect that value?”

There’s no getting around this clash of values; nor can statistics make the decisions for us. “There’s nothing about the number, itself, that tells you what to do,” Sarewitz said. “There is always a value judgment about whether or not what you’re doing to reduce what you see as a risk is worth the cost.”

Still, maybe we can agree to eschew certain avenues of research. The film "Jurassic Park" has become synonymous with this debate — as has a certain quote from Jeff Goldblum’s character Ian Malcolm:

“Your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.”

This image of “science gone mad” — whether through unscrupulous research or “bad actor” scenarios such as bioterror — occupies a central position among our fears about cutting-edge research. But it may ultimately matter the least, because experts agree that neither risk nor innovation will halt anytime soon.

“There is the argument that some types of knowledge we probably shouldn’t have. The only difficulty is, it’s really, really hard to stop people discovering that knowledge,” Maynard said.

So if we’re stuck with risk, the question arises: are we any good at assessing it?

After all, neuroscience shows that our brains are shot through with shortcuts and peppered with predispositions. In prehistoric times, these traits kept us alive, but today they can just as easily impede our ability to judge risks objectively. For example, said neuroscientist Samuel McClure of ASU’s Department of Psychology, many of us distrust air travel, despite its safety record, for one simple reason.

“People don’t like ceding control.”

It’s not hard to see how such qualms could translate to, say, self-driving cars.

“How comfortable are you going to be getting on the freeway and pressing a button and just letting the car take over?” McClure said.

McClure expanded on other ways that our mental foibles can hinder risk assessments.

“If you present things slightly differently, then you can get people to behave different ways.”

Part of that involves an effect known as "framing."

“So you can change people’s risk preferences by the way you phrase something,” McClure said.

Even if we were as rational as a Vulcan actuary, we would still have to overcome basic uncertainty. In his 1980 book, "The Social Control of Technology," David Collingridge described his namesake dilemma, which states that the easiest time to regulate a technology is early on, before economic interests take root. Unfortunately, that’s also when you know the least about what might go wrong.

“In 1910, we didn’t know what the risks of internal combustion engines were going to be vis-à-vis the climate, right? Now we know. Well, you can’t tell a billion people to stop driving,” Sarewitz said.

Complexity, too, carries its own cost, said Maynard.

“There’s a Theory of Normal Accidents that basically says, in any complex system, you’re always going to have some accidents — you can’t get rid of them.”

Maynard used the example of colliding technologies, such as the "Internet of Things," which wires the physical world to the digital, thereby leaving it vulnerable to hacking. All four experts cited the unprecedented effects that social media and global communications have exerted on social norms and the current political climate.

Among those impacts is a tendency to confirm our own biases, which can erode our ability to gauge innovations and risks. As we view science ever more through a political lens, and as we contract that lens by restricting our data diet to sources that share our views, our capacity for tolerating each other’s values — and grasping how those values relate to risks — suffers.

“You need to have specialized people to really evaluate and report back. And if the people, the scientists, you have do that report back, and then nobody believes them, that’s very problematic,” McClure said.

So what to do? Experts say we must accept risk as inescapable. Rather than focusing on our fears, they say, we should engage in a nuanced dialogue about our goals. Through education, we can learn to think critically about these issues and to ask better questions.

Ultimately, they say, we cannot count on laws and regulations alone. We must take personal responsibility for our futures — and hold companies, scientists, officials and democratic institutions accountable for building systems with greater resilience.

“It’s not a formula. It’s not like we — again, we can’t say, ‘We know what the risks are going to be and, therefore, we’re going to pursue the technology this way and not that way.’ But we can keep our eyes open in a lot more determined fashion,” Sarewitz said.
