A self-driving car has a split second to decide whether to swerve into oncoming traffic or hit a child who has lost control of her bicycle. An autonomous drone must decide whether to risk the lives of a busload of civilians or lose a long-sought terrorist. How does a machine make such moral decisions?
“Success in creating effective A.I.,” said the late Stephen Hawking, “could be the biggest event in the history of our civilization. Or the worst. We just don’t know.” Elon Musk called A.I. “a fundamental risk to the existence of civilization.” Are we creating the instruments of our own destruction or exciting tools for our future survival? Once we teach a machine to learn on its own – as the programmers behind AlphaGo have done, with wondrous results – where do we draw moral and computational lines? Leading specialists in A.I., neuroscience, and philosophy will tackle the very questions that may define the future of humanity.