On The Digital Life podcast, Jon Follett and Dirk Knemeyer discuss ethics and bias in AI, with Tomer Perry, research associate at the Edmond J. Safra Center for Ethics at Harvard University. What do we mean by bias when it comes to AI? And how do we avoid including biases we’re not even aware of?
If AI software for processing and analyzing data begins making decisions about core elements critical to our society, we'll need to address these issues. For instance, risk assessments used in the correctional system have been shown to incorporate bias against minorities. When it comes to self-driving cars, people want to be protected, but they also want the vehicle, in principle, to "do the right thing" when encountering situations where the lives of both the driver and others, like pedestrians, are at risk. How should we deal with this? What are the ground rules for ethics and morality in AI, and where do they come from? Join us as we discuss.