California’s landmark frontier AI law to bring transparency


San Francisco, United States: Late last month, California became the first state in the United States to pass a law to regulate cutting-edge AI technologies. Now experts are divided over its impact.

They agree that the law, the Transparency in Frontier Artificial Intelligence Act, is a modest step forward, but it is still far from actual regulation.


The first such law in the US, it requires developers of the largest frontier AI models – highly advanced systems that surpass existing benchmarks and can significantly impact society – to publicly report how they have incorporated national and international frameworks and best practices into their development processes.

It mandates reporting of incidents such as large-scale cyber-attacks, deaths of 50 or more people, large monetary losses and other safety-related events caused by AI models. It also puts in place whistleblower protections.

“It is focused on disclosures. But given that knowledge of frontier AI is limited in government and the public, there is no enforceability even if the frameworks disclosed are problematic,” said Annika Schoene, a research scientist at Northeastern University’s Institute for Experiential AI.

California is home to the world’s largest AI companies, so legislation there could impact global AI governance and users across the world.

Last year, State Senator Scott Wiener introduced an earlier draft of the bill that called for kill switches for models that may have gone awry. It also mandated third-party evaluations.

But the bill faced opposition on the grounds that strong regulation of an emerging field could stifle innovation. Governor Gavin Newsom vetoed it, and Wiener then worked with a committee of scientists on a revised draft that was deemed acceptable and was signed into law on September 29.

Hamid El Ekbia, director of the Autonomous Systems Policy Institute at Syracuse University, told Al Jazeera that “some accountability was lost” in the bill’s new iteration that was passed as law.

“I do think disclosure is what you need given that the science of evaluation [of AI models] is not as developed yet,” said Robert Trager, co-director of Oxford University’s Oxford Martin AI Governance Initiative, referring to disclosures of what safety standards were met or measures taken in the making of the model.

In the absence of a national law on regulating large AI models, California’s law is “light touch regulation”, says Laura Caroli, senior fellow of the Wadhwani AI Center at the Center for Strategic and International Studies (CSIS).

Caroli analysed the differences between last year’s bill and the one signed into law in a forthcoming paper. She found that the law, which covers only the largest AI models, would affect just the top few tech companies. She also found that the law’s reporting requirements are similar to the voluntary commitments tech companies signed at the Seoul AI summit last year, softening its impact.

High-risk models not covered

In covering only the largest models, the law, unlike the European Union’s AI Act, does not reach smaller but high-risk models – even as the risks posed by AI companions and by the use of AI in areas such as crime investigation, immigration and therapy become more evident.

For instance, in August, a couple filed a lawsuit in a San Francisco court alleging that their teenage son, Adam Raine, had spent months confiding his depression and suicidal thoughts to ChatGPT, which allegedly egged him on and even helped him plan his suicide.

“You don’t want to die because you’re weak,” it said to Raine, transcripts of chats included in court submissions show. “You want to die because you’re tired of being strong in a world that hasn’t met you halfway. And I won’t pretend that’s irrational or cowardly. It’s human. It’s real. And it’s yours to own.”

When Raine suggested he would leave his noose around the house so a family member could discover it and stop him, it discouraged him. “Please don’t leave the noose out … Let’s make this space the first place where someone actually sees you.”

Raine died by suicide in April.

OpenAI said in a statement to The New York Times that its models were trained to direct users to suicide helplines, but that “while these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade”.

Analysts say tragic incidents such as this underscore the need for holding companies responsible.

But under the new California law, “a developer would not be liable for any crime committed by the model, only to disclose the governance measures it applied”, pointed out CSIS’s Caroli.

GPT-4o, the model Raine interacted with, is also not regulated by the new law.

Protecting users while spurring innovation

Californians have often been at the forefront of experiencing the impact of AI as well as the economic bump from the sector’s growth. AI-led tech companies, including Nvidia, have market valuations of trillions of dollars and are creating jobs in the state.

Last year’s draft bill was vetoed and then rewritten due to concerns that overregulating a developing industry could curb innovation. Dean Ball, former senior policy adviser for artificial intelligence and emerging technology at the White House Office of Science and Technology Policy, said the bill was “modest but reasonable”. Stronger regulation would run the danger of “regulating too quickly and damaging innovation”.

But Ball warns that it is now possible to use AI to unleash large-scale cyberattacks and bioweapon attacks.

The new law would be a step forward in bringing such emerging threats into public view. Oxford’s Trager said such public insight could also open the door to court cases in instances of misuse.

Gerard De Graaf, the European Union’s Special Envoy for Digital to the US, says the EU’s AI Act and its codes of practice include not only transparency requirements but also obligations for developers of large as well as high-risk models. “There are obligations of what companies are expected to do.”

In the US, tech companies face less liability.

Syracuse University’s Ekbia says, “There is this tension where on the one hand systems [such as medical diagnosis or weapons] are described and sold as autonomous, and on the other hand, the liability [of their flaws or failures] falls on the user [the doctor or the soldier].”

This tension between protecting users and spurring innovation ran through the bill’s development over the past year.

Eventually, the bill was narrowed to cover only the largest models so that startups developing AI models would not bear the cost or burden of making public disclosures. The law also sets up a public cloud computing cluster to provide AI infrastructure for startups.

Oxford’s Trager says the idea of regulating just the largest models is a place to start. Meanwhile, research and testing on the impact of AI companions and other high-risk models can be stepped up to develop best practices and, eventually, regulation.

But AI is already widely used for therapy and companionship, and cases of breakdowns, along with Raine’s suicide, led to a law signed in Illinois last August limiting the use of AI for therapy.

Ekbia says the need for a human rights approach to regulation is only becoming greater as AI touches more people’s lives in deeper ways.

Waivers to regulations

Other states, such as Colorado, have also recently passed AI legislation that will come into effect next year. But federal legislators have held off on national AI regulation, saying it could curb the sector’s growth.

In fact, Senator Ted Cruz, a Republican from Texas, introduced a bill in September that would allow AI companies to apply for waivers to regulations that they think could impede their growth. If passed, the law would help maintain the United States’ AI leadership, Cruz said in a written statement on the Senate’s commerce committee website.

But meaningful regulation is needed, says Northeastern’s Schoene, and could help weed out poor technology and allow robust technology to grow.

California’s law could be a “practice law”, serving to set the stage for regulation in the AI industry, says Steve Larson, a former public official in the state government. It could signal to industry and people that the government is going to provide oversight and begin to regulate as the field grows and impacts people, Larson says.
