Helen Toner, a former OpenAI board member and the director of strategy at Georgetown’s Center for Security and Emerging Technology, worries that Congress might take a reactive approach to AI policymaking. If the current state of affairs does not change, she fears, lawmakers could make rash decisions without proper deliberation.
“Congress right now — I don’t know if anyone’s noticed — is not super functional, not super good at passing laws, unless there’s a massive crisis,” Toner said at TechCrunch’s StrictlyVC event in Washington, D.C. on Tuesday. “AI is going to be a big, powerful technology — something will go wrong at some point. And if the only laws that we’re getting are being made in a knee-jerk way, in reaction to a big crisis, is that going to be productive?”
Toner’s comments underscore the ongoing impasse in U.S. AI policy as a White House-sponsored summit on AI approaches on Thursday. While President Joe Biden signed an executive order in 2023 enacting certain consumer protections and requiring developers of AI systems to share safety test results with relevant government agencies, Congress has yet to pass any comprehensive AI legislation. That stands in stark contrast to the EU, which recently enacted its AI Act. And with the 2024 elections looming, progress is unlikely in the near future.
As the Brookings Institution highlights in a report, this void in federal rulemaking has prompted state and local governments to step in, resulting in a proliferation of AI-related bills. In 2023 alone, state legislators introduced over 440% more AI-focused bills than in the previous year. And in recent months, nearly 400 new state-level AI bills have been proposed, according to the lobbying group TechNet.
California, for example, advanced approximately 30 new AI bills last month aimed at protecting consumers and jobs. Colorado approved a measure requiring AI companies to use “reasonable care” in developing their technology to avoid discrimination. And in March, Tennessee enacted the ELVIS Act, which prohibits using AI to clone musicians’ voices or likenesses without their explicit consent.
However, this patchwork of state rules and regulations only adds to the complexity and uncertainty for both the industry and consumers.
For instance, states define “automated decision making” differently: some laws treat a decision as automated even when a human is involved at some level, while others set stricter criteria.
Toner argues that a high-level federal mandate would be more effective than this patchwork approach.
“Some of the smarter and more thoughtful actors that I’ve seen in this space are trying to say, OK, what are the pretty light-touch — pretty common-sense — guardrails we can put in place now to make future crises — future big problems — likely less severe, and basically make it less likely that you end up with the need for some kind of rapid and poorly-thought-through response later,” she said.