Can California show the way forward on AI safety?
A new state bill aims to protect us from the most powerful and dangerous AI models.
Last week, California state Senator Scott Wiener (D-San Francisco) introduced a landmark new piece of AI legislation aimed at “establishing clear, predictable, common-sense safety standards for developers of the largest and most powerful AI systems.”
It’s a well-written, politically astute approach to regulating AI, narrowly focused on the companies building the biggest-scale models and the possibility that those massive efforts could cause mass harm.
As it has in fields from car emissions to climate change, California’s legislation could provide a model for national regulation, which will likely take much longer to arrive. But whether or not Wiener’s bill makes it through the statehouse in its current form, its existence reflects a shift: politicians are starting to take tech leaders seriously when they say they intend to build radical, world-transforming technologies that pose significant safety risks — and to stop taking them seriously when they claim, as some do, that they should do so with no oversight at all.
What the California AI bill gets right
One challenge of regulating powerful AI systems is defining just what you mean by “powerful AI systems.” We’re in the middle of an AI hype cycle, and every company in Silicon Valley claims to be using AI, whether that means building customer service chatbots, day-trading algorithms, general intelligences capable of convincingly mimicking humans, or even literal killer robots.
Getting the definition right is vital, because AI has enormous economic potential, and clumsy, excessively stringent regulations that crack down on beneficial systems could do enormous economic damage while doing surprisingly little about the very real safety concerns.
The California bill attempts to avoid this problem in a straightforward way: it concerns itself only with so-called “frontier” models, those “substantially more powerful than any system that exists today.” Wiener’s team argues that a model which meets the threshold the bill sets would cost at least $100 million to build, which means that any company that can afford to build one can definitely afford to comply with some safety regulations.
Even for such powerful models, the requirements aren’t overly onerous: the bill requires that companies developing them prevent unauthorized access, be capable of shutting down copies of their AI in the case of a safety incident (though not copies held by others — more on that later), and notify the state of California of how they plan to do all this. Companies must demonstrate that their model complies with applicable regulations (for example, from the federal government — though such regulations don’t exist yet, they may at some point). And they have to describe the safeguards they’re employing for their AI and why those safeguards are sufficient to prevent “critical harms,” defined as mass casualties and/or more than $500 million in damages.
The California bill was developed in significant consultation with leading, highly respected AI scientists, and released with endorsements from leading AI researchers, tech industry leaders, and advocates for responsible AI alike. It’s a reminder that despite vociferous, heated online disagreement, there’s actually a great deal these various groups agree on.
“AI systems beyond a certain level of capability can pose meaningful risks to democracies and public safety,” Yoshua Bengio, considered one of the godfathers of modern AI and a leading AI researcher, said of the proposed law. “Therefore, they should be properly tested and subject to appropriate safety measures. This bill offers a practical approach to accomplishing this, and is a major step toward the requirements that I’ve recommended to legislators.”
Of course, that’s not to say that everyone loves the bill.
What the California AI bill doesn’t do
Some critics worry that the bill, while a step forward, will be toothless in the case of a truly dangerous AI system. For one thing, if a safety incident requires a “full shutdown” of an AI system, the law doesn’t require companies to retain the capability to shut down copies of the AI that have been released publicly or are owned by other companies or actors. That makes the proposed regulations easier to comply with, but because an AI model, like any computer program, is easy to copy, in the event of a serious safety incident it wouldn’t actually be possible to just pull the plug.
“When we really need a full shutdown, this definition won’t work,” analyst Zvi Mowshowitz writes. “The whole point of a shutdown is that it happens everywhere whether you control it or not.”
There are also many concerns about AI that can’t be addressed by this particular bill. Researchers working on AI anticipate that it will change our society in many ways (for better and for worse), and cause diverse and varied harms: mass unemployment, cyberwarfare, AI-enabled fraud and scams, algorithmic codification of biased and unfair procedures, and many more.
To date, most public policy on AI has tried to target all of those at once: Biden’s executive order on AI last fall mentions all of these concerns. These problems, though, will require very different solutions, including some we have yet to imagine.
But existential risks, by definition, have to be solved to preserve a world in which we can make progress on all the others — and AI researchers take seriously the possibility that the most powerful AI systems will eventually pose a catastrophic risk to humanity. Regulation addressing that possibility should therefore be focused on the most powerful models, and on our ability to prevent mass casualty events they could precipitate.
At the same time, a model doesn’t have to be extremely powerful to raise serious questions of algorithmic bias or discrimination — an extremely simple model that predicts recidivism or mortgage eligibility on the basis of data reflecting decades of discriminatory practices can do that. Tackling those issues will require a different approach, one less focused on powerful frontier models and mass casualty incidents and more on our ability to understand and predict even simple AI systems.
No one law could possibly solve every challenge that we’ll face as AI becomes a bigger and bigger part of modern life. But it’s worth keeping in mind that “don’t release an AI that will predictably cause a mass casualty event,” while it’s a crucial element of ensuring that powerful AI development proceeds safely, is also a ridiculously low bar. Helping this technology reach its full potential for humanity — and ensuring that its development goes well — will require a lot of smart and informed policymaking. What California is attempting is just the beginning.
A version of this story originally appeared in the Future Perfect newsletter. Sign up here!