There’s a false narrative surrounding artificial intelligence: that it cannot be regulated. This idea stems, in part, from the beliefs that regulation will stifle innovation and hamper economic potential, and that AI will naturally evolve beyond its original code.
In this episode of Big Tech, co-hosts David Skok and Taylor Owen speak with Joanna J. Bryson, incoming professor of ethics and technology at the Hertie School of Governance in Berlin, beginning February 2020. Professor Bryson begins by explaining the difference between intelligence and AI, and how that foundational understanding helps us see that regulation is possible in this space.
“We need to be able to go back then and say, ‘OK, did you follow a good process?’ A car manufacturer, they’re always recording what they did because they do a phenomenally dangerous and possibly hazardous thing … It’s the same thing with software,” Bryson explains. It is the responsibility of nations to protect those inside their borders, and that protection must extend to data rights. She discusses how the EU General Data Protection Regulation—a harmonized set of rules that covers a large area and crosses borders—is an example of international cooperation that produced a set of standards and regulations relevant to AI development.