Sean Franzblau, Dan Hayes, and Heather West published a two-part series, “The New Rules of AI,” in Legaltech News, a division of Law.com focused on technology and innovation. The following is an excerpt:
The New Rules of AI: Part 1—Managing Risk
The global economy—and the nature of work itself—is transforming as businesses and other organizations adopt increasingly advanced artificial intelligence (AI) systems to perform core analytical, creative, and decision-making functions. While AI technologies carry revolutionary economic potential, they can pose significant risks if not managed properly. As the use of AI systems spreads across industries, regulators are working to keep up with the technology. Congress has yet to enact federal laws on foundational questions of how organizations should manage AI safety, trustworthiness, and accountability. Courts are wrestling with the extent to which basic legal principles that undergird the economy—such as property rights and liability rules—should apply to these technologies. State legislatures and various federal agencies have stepped in to fill the void, creating a patchwork of legal and technical standards that can be difficult for organizations to track.
Against this backdrop, the primary burden of managing AI’s risks often falls on corporate officers tasked with designing, implementing, and supervising internal AI governance programs for their organizations.
The New Rules of AI: Part 2—Designing and Implementing Governance Programs
We now turn to the five essential steps organizations must take to design and implement effective AI governance programs.