THE UNITED STATES GOVERNMENT should create a new body to regulate artificial intelligence and restrict work on language models like OpenAI’s GPT-4 to companies granted licenses to do so. That’s the recommendation of a bipartisan pair of senators, Democrat Richard Blumenthal and Republican Josh Hawley, who unveiled a legislative framework yesterday intended to serve as a blueprint for future laws and to influence other bills before Congress.
Under the proposal, developing facial recognition and other “high-risk” applications of AI would require a government license. To obtain one, companies would have to test AI models for potential harm before deployment, disclose instances when things go wrong after launch, and allow audits of their AI models by an independent third party.
The framework also proposes that companies publicly disclose details of the training data used to create an AI model, and that people harmed by AI get a right to bring lawsuits against the company responsible.
These proposals put forth by the senators are likely to have a significant impact in the coming days and weeks as the AI regulation debates intensify in Washington. Next week, Blumenthal and Hawley will chair a Senate subcommittee hearing focused on devising meaningful mechanisms to hold businesses and governments accountable when deploying AI systems that harm people or violate their rights. Testimonies are expected from Microsoft President Brad Smith and William Dally, the chief scientist of chipmaker Nvidia.
The following day, Senator Chuck Schumer will convene the first in a series of meetings to discuss AI regulation, a challenge he has called “one of the most difficult things we’ve ever undertaken.” Attendees include prominent tech executives with a stake in AI, such as Mark Zuckerberg, Elon Musk, and the CEOs of Google, Microsoft, and Nvidia. Also present will be members of the Writers Guild and the AFL-CIO, representing trade unions whose members are likely to be affected by AI algorithms, along with experts on protecting human rights in the age of AI, such as Deb Raji of UC Berkeley and Rumman Chowdhury, CEO of Humane Intelligence and former ethical AI lead at Twitter.
Anna Lenhart, formerly leading an AI ethics initiative at IBM and currently a PhD candidate at the University of Maryland, views the senators’ legislative framework as a welcome development following years of AI experts testifying before Congress to advocate for AI regulation.
“It is truly invigorating to witness their proactive approach, rather than waiting for a series of forums or a commission that would spend considerable time conversing with experts, ultimately producing a similar list,” says Lenhart.
Still, Lenhart is unsure how a new AI oversight body could acquire the breadth of technical and legal expertise required to govern the wide range of technologies that incorporate AI, from autonomous vehicles to health care and housing. “That’s where I get a little stuck on the idea of a licensing regime,” Lenhart says.
The idea of using licenses to restrict the development of powerful AI systems has gained traction in both industry and Congress. OpenAI CEO Sam Altman proposed licensing for AI developers in testimony before the Senate in May, a regulatory measure that could, arguably, help his company maintain its leading position. A bill introduced last month by senators Lindsey Graham and Elizabeth Warren would also require a government license for tech companies above a certain size, though it covers only digital platforms.
Lenhart is not the only AI or policy expert skeptical of government licensing for AI development. In May the idea drew criticism from Americans for Prosperity, a libertarian-leaning political campaign group that fears licensing would stifle innovation, and from the digital rights nonprofit Electronic Frontier Foundation, which warned that the regime could be captured by companies with money or influential connections. Perhaps in response, the framework released yesterday recommends strong conflict-of-interest rules for staff at the AI oversight body.
Blumenthal and Hawley’s new framework for future AI regulation leaves some questions unanswered. It is not yet clear whether oversight of AI would come from a newly created federal agency or a group within an existing one. Nor have the senators specified the criteria that would mark a particular use case as high risk and therefore requiring a license to develop.
Michael Khoo, program director for climate disinformation at the environmental nonprofit Friends of the Earth, calls the new proposal a good first step but says more details are needed to adequately evaluate its ideas. His organization is part of a coalition of environmental and tech accountability groups that, via a letter to Schumer, and a mobile billboard due to circle Congress next week, is urging lawmakers to prevent energy-intensive AI projects from worsening climate change.