Critics contend that the proposed legislation ignores the wider picture by concentrating on the existential threat posed by AI
A new measure in California, USA, that seeks to regulate large-scale artificial intelligence models has run into stiff resistance from a range of tech sector players, including investors, startup founders, AI researchers, and supporters of open-source software. Scott Wiener, a state senator from California, introduced the legislation, which is known as SB 1047.
According to Wiener, the measure requires creators of large, powerful AI systems to follow sensible safety guidelines. Opponents of the bill contend that it will undermine innovation and put the AI sector as a whole at risk.
The contentious bill was approved by the California assembly in May and is currently making its way through several committees. The bill may be referred to Governor Gavin Newsom for approval following a final vote in August. SB 1047, if enacted, would be the first significant AI law in the country, and it would come from a state that is home to numerous large tech companies.
What does the bill recommend?
Formally titled the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act,” SB 1047 seeks to hold leading AI firms such as Meta, OpenAI, Anthropic, and Mistral responsible for the potentially disastrous risks that come with the quickly developing field of artificial intelligence.
The bill primarily targets organisations developing massive frontier AI models. Under its definition, “large” refers to AI systems trained using computing power of at least 10²⁶ floating-point operations (FLOP) and at a cost greater than $100 million (about Rs 834 crore). The bill also covers AI models fine-tuned using computing power of more than 3 × 10²⁵ FLOP.
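Purely as an illustration of the thresholds described above, the short Python sketch below shows how the bill’s compute and cost criteria might be expressed; the variable names and structure are hypothetical and are not drawn from the bill’s text.

```python
# Hypothetical sketch of SB 1047's coverage thresholds as described in this article.
# Threshold names and function signature are illustrative, not from the bill itself.

TRAINING_FLOP_THRESHOLD = 1e26          # training compute: 10^26 floating-point operations
TRAINING_COST_THRESHOLD_USD = 100_000_000  # training cost: more than $100 million
FINE_TUNE_FLOP_THRESHOLD = 3e25         # fine-tuning compute: 3 x 10^25 operations


def is_covered_model(training_flop: float,
                     training_cost_usd: float,
                     fine_tune_flop: float = 0.0) -> bool:
    """Return True if a model meets either coverage criterion described above."""
    trained_above_threshold = (training_flop >= TRAINING_FLOP_THRESHOLD
                               and training_cost_usd > TRAINING_COST_THRESHOLD_USD)
    fine_tuned_above_threshold = fine_tune_flop >= FINE_TUNE_FLOP_THRESHOLD
    return trained_above_threshold or fine_tuned_above_threshold


# Example: a model trained with 2e26 FLOP at a cost of $150 million would be covered.
print(is_covered_model(training_flop=2e26, training_cost_usd=150_000_000))  # True
```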
According to the bill, “Future developments in artificial intelligence may present significant risks to public safety and security if sufficient human control is not in place. This covers the possibility of developing and disseminating WMDs, such as chemical, biological, and nuclear weapons, in addition to cyberoffensive capabilities.”
The bill’s most recent iteration proposes that developers of large-scale AI models could be held liable for “critical harms.” These include the use of AI in the development of chemical or nuclear weapons, attacks on critical infrastructure, and crimes committed by AI models operating with little to no human oversight that result in fatalities, serious injuries, or the destruction of property.
Developers are not held liable, however, if the harmful AI-generated output is based on publicly accessible data. Notably, the bill requires AI models to include an emergency kill switch. It also prohibits developers from deploying large frontier AI models that pose an unreasonable risk of causing or enabling critical harm.
AI models must undergo audits by independent third-party auditors to ensure compliance. Developers who violate the bill’s proposed requirements could face legal action brought by California’s attorney general. They would also have to meet safety standards set by the ‘Frontier Model Division’, a new AI certification body that the California government intends to create.
What is the root of the bill’s controversy?
The draft law essentially echoes the concerns raised by those who are pessimistic about AI. AI pioneers such as Yoshua Bengio and Geoffrey Hinton, who support regulation because of the existential risks they associate with the technology, have endorsed it. The Center for AI Safety, which is sponsoring the bill, stated in an open letter that the risks posed by AI are comparable to those of pandemics or nuclear weapons.
Although these figures and organisations support the measure, much of the rest of the tech industry has strongly opposed it. One of the main arguments against the bill is that it could effectively do away with open-source AI models.
Open-source AI models allow users to freely access and modify their internal workings, which improves transparency and security. The new California bill, however, would deter companies like Meta from releasing their AI models under public licences, owing to concerns about being held liable for misuse by other developers.
Experts have also pointed out that preventing AI systems from malfunctioning is harder than it first appears. Placing the entire regulatory burden on AI companies may therefore not be entirely fair, especially since the safety requirements set out in the bill may not be flexible enough to keep pace with the rapid advancement of AI technology.