With each passing year, artificial intelligence is becoming more prevalent in daily life. Yet, even though AI is no longer a brand-new technology, few norms and broadly agreed-upon rules govern its use.
What should those norms and rules be as we enter a future in which more (and more critical) decisions are made by computers?
Two Sigma co-founder David Siegel joined a panel of luminaries in the field of AI at the 2019 World Economic Forum in Davos to share ideas on this and related questions.
WIRED’s Nicholas Thompson moderated the discussion, entitled “Setting the Rules for the AI Race.” Other panelists included:
- Kai-Fu Lee, chairman and founder of Sinovation Ventures
- Amitabh Kant, CEO of the NITI Aayog
- Amy Webb, CEO of Future Today Institute
- Jim Hagemann Snabe, chairman of Siemens and Maersk
Old rules for a new world?
Over the course of the hour-long talk, several recurring themes emerged.
In a world where humans are going to delegate more and more decisions to computers, questions of accountability abound: How can we mitigate problems like embedded bias in datasets and algorithms? Do we need new laws to govern how AI algorithms work?
“We should start to really look at whether or not we’re using the old rules properly,” said Siegel, adding that in cases of product liability or the potential for AI and Big Data to drive the formation of monopolies, most countries already have relevant laws on the books.
“I think people are becoming uncomfortable with computers automatically making decisions,” he added, noting that computers have, in fact, been doing so for a long time, and that in critical applications, an extensive testing and certification process is warranted.
AI and the “explainability” problem
The panel also discussed whether machine learning algorithms should be required to “explain” their outputs, currently one of the most difficult technical challenges practitioners face. Indeed, it remains one of the most active areas of research in the field today, and the participants agreed that in an increasingly algorithmic world, the need for explainability (or “interpretability”) is acute.
A double standard could be at play here, however, Siegel noted. The challenge of providing a clear, traceable rationale for decisions is not limited to AI; humans often have the same shortcoming.
“This is the miracle of the human mind: It’s amazingly sophisticated. Of course, I can overfit the data,” he pointed out. When humans make a decision, however, they can usually provide an explanation, yet they are not aware of all the biases and other subtle factors that truly went into it.
Challenges and opportunities
The panel’s participants debated a wide range of additional issues related to artificial intelligence, Big Data, and society, including potential impacts on the future of democracy, differences in cross-border regulations and development priorities, and the extent to which humans should remain in the loop when it comes to automated decision-making.
Most agreed, however, that one of the biggest opportunities worldwide relates to harnessing massive datasets and powerful algorithms to advance healthcare research and medical science. With healthcare data still highly fragmented, significantly greater cooperation at the national, international, and institutional levels will be paramount.
“I think that the world should get together and decide that this is a terrific application for machine learning, and establish rules where every country agrees to contribute its health data to a global repository that AI researchers can openly use,” said Siegel.
“This would be motivating to the world and is something that machine-learning technology, I’m convinced, is very applicable towards. We should form an alliance globally and get the job done, and save millions of lives. We can do this in a decade.”
Watch the full discussion here.