Illustration of a robotic finger touching a human finger. /CFP
A group of current and former employees at artificial intelligence companies, including Microsoft-backed OpenAI and Alphabet's Google DeepMind, on Tuesday raised concerns about risks posed by the emerging technology.
Eleven present and former employees of OpenAI and one current and one former employee of Google DeepMind said in an open letter that the profit-driven nature of AI businesses impedes effective oversight.
"We do not believe bespoke structures of corporate governance are sufficient to change this," the letter added.
It further warns of risks from unregulated AI, ranging from the spread of misinformation to the deepening of existing inequalities and the loss of control of autonomous AI systems, potentially resulting in "human extinction."
Researchers have found examples of image generators from companies including OpenAI and Microsoft producing photos with voting-related disinformation, despite policies against such content.
AI companies have "weak obligations" to share information with governments about the capabilities and limitations of their systems, the letter said, adding that these firms cannot be relied upon to share that information voluntarily.
The open letter is the latest to raise safety concerns around generative AI technology, which can quickly and cheaply produce human-like text, imagery and audio.
The group has urged AI firms to facilitate a process for current and former employees to raise risk-related concerns and not enforce confidentiality agreements that prohibit criticism.
Separately, OpenAI said on Thursday it disrupted five covert influence operations that sought to use its artificial intelligence models for "deceptive activity" across the internet.