U.S. Financial Oversight Group Raises Alarms Over AI Risks

The Financial Stability Oversight Council (FSOC), chaired by Treasury Secretary Janet Yellen and comprising the nation's leading financial regulators, raised concerns in its recent annual report about the escalating risks posed by the rapid adoption of artificial intelligence (AI) in the U.S. financial sector. The report, released on December 14, is the first of the council's annual financial stability reviews to explicitly identify AI as a source of risk.

While acknowledging AI's potential to drive innovation and efficiency at financial firms, the FSOC stressed that rapid technological progress demands vigilant supervision. The report pointed to specific dangers, including cybersecurity threats and the complexity of AI models, and recommended that both firms and regulators deepen their expertise and capacity to monitor AI developments and identify emerging risks.

The report noted that the complexity and technical nature of some AI tools make it difficult for institutions to monitor or explain them, and cautioned that without a thorough understanding of these systems, firms risk overlooking biased or inaccurate results.

The report also flagged AI's growing reliance on large external datasets and third-party vendors, which raises privacy and cybersecurity concerns.

On the regulatory front, agencies such as the U.S. Securities and Exchange Commission, an FSOC member, have been scrutinizing firms' use of AI, and the White House has issued an executive order aimed at mitigating AI risks.

Notably, Pope Francis, in a letter dated December 8, voiced his apprehension regarding AI’s potential threats to humanity. He advocated for a global agreement to ethically govern AI development, warning of the dangers of a “technological dictatorship” without proper regulation.

Prominent technology figures, including Elon Musk and Steve Wozniak, have also voiced concerns about the pace of AI development. In March 2023, they were among the more than 2,600 tech leaders and researchers who signed an open letter calling for a temporary pause on the development of AI systems more powerful than GPT-4, citing the serious risks such systems pose to society and humanity.