Davis Hake was interviewed by CEB on new bills in California to regulate AI safety and transparency. Below is an excerpt from the article.
[Davis Hake] says that most developers and deployers of AI systems support some form of regulation of the technology. The divide is over whether that regulation should apply broadly to the technology itself or be tailored to specific contexts and their particular types of harm. “A driverless tractor faces a different risk profile than autonomous vehicles, even though it’s the same technology,” Hake says.
Either way, Hake says a lot of states “are hungry to get involved in this area,” and California in particular “sees an opportunity to drive the national discussion on this through quick action.” Supporters of SB 1047 see the bill as a first stake in the ground: not perfect, but setting a precedent for the establishment of a future AI liability regime. Opponents of the bill, however, say the legislation is overly broad, not context-specific, and tries to guess at future unknown harms: a trinity of errors in technology policymaking.
Hake adds that opponents of the bill question whether a state jurisdiction, even one as large as California, is the proper venue for AI safety regulations when similar policy is already being discussed at the national and international levels. That question grew sharper after the European Union passed its comprehensive AI law in March, with which U.S. companies will have to achieve some level of compliance or parity.