The White House has announced a $140m investment in AI advancements that prioritize ethics, trustworthiness, responsibility, and public welfare
Prior to meeting with leading executives in the AI industry, including those from Google, Microsoft, and OpenAI, the US President and Vice President announced measures to address the potential risks associated with unregulated advancements in AI. The White House emphasized that companies responsible for developing such technology must prioritize safety before deployment or public release.
There is growing concern that if the development of AI is not regulated, private companies’ use of this technology could endanger jobs, increase the potential for fraud, and violate data privacy.
On Thursday, the US government announced a plan to invest $140m (£111m) in seven new national AI research institutes focused on developing AI that is ethical, trustworthy, responsible, and serves the public good. The private sector currently dominates AI development: last year the tech industry produced 32 significant machine-learning models, compared with three from academia. Nonetheless, top AI developers, including OpenAI, Google, Microsoft, and the UK’s Stability AI, have agreed to allow their systems to be publicly evaluated at the upcoming Defcon 31 cybersecurity conference.
According to the White House, the forthcoming public evaluation of AI systems will give researchers and the general public valuable information about the impact of these models. During the meeting, President Biden, who has previously experimented with ChatGPT, stressed the importance of mitigating the risks AI poses to individuals, society, and national security. In a statement released after the meeting, Vice President Harris acknowledged that generative AI, which includes products such as ChatGPT and Stable Diffusion, presents both risks and opportunities, and said the private sector has an ethical, moral, and legal obligation to ensure the safety and security of its products.
On Thursday, the US government unveiled additional policies, including draft guidance from the President’s Office of Management and Budget on the use of AI in the public sector. Last October, the White House released a blueprint for an “AI bill of rights,” which proposed safeguarding individuals from unsafe or ineffective AI systems through pre-launch testing and ongoing monitoring, as well as protecting against abusive data practices like unchecked surveillance.
Robert Weissman, the president of Public Citizen, a consumer rights non-profit organization, viewed the White House’s announcement as a positive step but maintained that more assertive measures were necessary, suggesting a moratorium on the deployment of new generative AI technologies.
Weissman argued that big tech firms must be protected from themselves: the companies and their leading AI developers are well aware of the hazards of generative AI, but they are locked in a competitive arms race and believe they cannot slow down. On Thursday, the UK’s competition regulator raised its own concerns about AI development, launching an investigation into the models that power products such as ChatGPT and Google’s Bard chatbot. Earlier in the week, Dr. Geoffrey Hinton, the British computer scientist known as the “godfather of AI,” resigned from Google so he could speak freely about the dangers of the technology.