OpenAI CEO Sam Altman testified before Congress, emphasizing the need for government intervention to mitigate the risks posed by increasingly powerful AI systems. Altman proposed establishing a licensing agency, at either a national or global level, that would regulate and monitor the most advanced AI systems to ensure compliance with safety standards.
Concerns about AI's potential impact on society, from job displacement to the spread of falsehoods, have prompted discussions among tech CEOs and U.S. agencies about the need for regulation. While Congress has yet to formulate comprehensive AI rules like its European counterparts, the hearing marked an important step in addressing these concerns.
During the hearing, Senator Richard Blumenthal highlighted the necessity for AI companies to test their systems, disclose known risks, and address the potential destabilization of the job market by future AI systems. Altman, though cautious about expressing his own fears about AI, suggested that a regulatory agency should implement safeguards to prevent AI models from self-replicating and exerting undue control over humans.
OpenAI, founded in 2015 with a focus on safety, has gained attention with AI products such as ChatGPT and DALL-E and has received substantial investment from companies like Microsoft.
Altman plans to embark on a worldwide tour to engage with policymakers and the public on AI-related matters. In addition to Altman, IBM’s chief privacy and trust officer, Christina Montgomery, and NYU professor emeritus Gary Marcus also testified.
Marcus was part of a group of AI experts who called for a six-month pause in the development of powerful AI models, aiming to provide more time for society to consider the associated risks.
While the hearing marked a critical step toward determining Congress's course of action, tech industry leaders have expressed support for AI oversight while cautioning against overly burdensome regulations. IBM's Montgomery suggested a "precision regulation" approach, focusing on specific use cases rather than broad regulation of the technology itself.