Current and former employees sign letter warning of insufficient safety oversight and calling for stronger whistleblower protections
On Tuesday, a group of current and former employees of leading artificial intelligence companies published an open letter warning of insufficient safety oversight across the industry and calling for stronger protections for whistleblowers.
The letter, which advocates for a “right to warn about artificial intelligence,” is one of the most public expressions of concern about AI’s dangers from employees inside what is generally a secretive industry. Eleven current and former OpenAI employees signed the letter, along with two current or former Google DeepMind employees, one of whom previously worked at Anthropic.
The letter states, “AI companies have significant non-public information regarding their systems’ capabilities and limitations, the effectiveness of their protective measures, and the varying levels of risk for different types of harm. However, they are currently only minimally required to share some of this information with governments and not at all with civil society. We believe they cannot all be trusted to share this information voluntarily.”
OpenAI defended its practices in a statement, citing avenues such as an internal tipline for reporting issues, and emphasized that it does not release new technology until appropriate safeguards are in place. Google did not immediately respond to a request for comment.
An OpenAI spokesperson stated, “We’re proud of our track record in providing the most capable and safest AI systems. We believe in our scientific approach to addressing risks. We acknowledge the importance of rigorous debate due to the significance of this technology, and we will continue engaging with governments, civil society, and other communities worldwide.”
Concerns about the potential dangers of artificial intelligence have existed for decades, but the recent AI boom has intensified them and left regulators scrambling to keep pace with the technology. While AI companies have publicly committed to developing the technology safely, researchers and employees have warned of a lack of oversight, fearing that AI tools could exacerbate existing social harms or create entirely new ones.
The letter, first reported by The New York Times, calls for greater protections for workers at advanced AI companies who choose to raise safety concerns. It asks companies to commit to four principles of transparency and accountability, including a pledge not to force employees into non-disparagement agreements that bar them from discussing AI-related risks, and the creation of a mechanism for employees to raise concerns anonymously with board members.
The letter asserts, “As long as there is no effective government oversight of these corporations, current and former employees are among the few individuals who can hold them accountable to the public. However, extensive confidentiality agreements prevent us from expressing our concerns openly, except to the very companies that may be neglecting these issues.”
Companies like OpenAI have also employed aggressive measures to prevent employees from freely discussing their work. According to a recent Vox report, OpenAI required departing employees to sign highly restrictive non-disparagement and non-disclosure agreements or forfeit all vested equity. In response to the report, Sam Altman, CEO of OpenAI, issued an apology and pledged to revise the off-boarding procedures.
The letter follows the departures last month of two senior OpenAI figures: co-founder Ilya Sutskever and key safety researcher Jan Leike. After leaving, Leike said that OpenAI had prioritized “shiny products” over a culture of safety.
The open letter published on Tuesday echoed some of Leike’s concerns, arguing that the companies have shown little willingness to be transparent about their practices voluntarily.