The US Department of Commerce’s Bureau of Industry and Security (BIS) has proposed new mandatory reporting requirements for developers of advanced AI models and cloud service providers. These rules would require companies to report on development activities, cybersecurity measures, and results from red-teaming tests, which assess risks such as AI systems aiding in cyberattacks or enabling the creation of dangerous weapons. Secretary of Commerce Gina M. Raimondo stated, “This proposed rule would help us keep pace with new developments in AI technology to bolster our national defense and safeguard our national security.”
The proposed regulations follow a pilot survey by the BIS earlier this year and come amid global efforts to regulate AI. For enterprises, these regulations could increase costs and slow down operations. Charlie Dai, VP and Principal Analyst at Forrester, said, “Enterprises will need to invest in additional resources to meet the new compliance requirements, such as expanding compliance workforces, implementing new reporting systems, and possibly undergoing regular audits.” Companies may need to modify their processes to gather and report the required data, potentially leading to changes in AI governance, data management practices, cybersecurity measures, and internal reporting protocols, Dai added.
How far BIS will act on the reported information remains uncertain, but the agency has previously played a key role in keeping software with security vulnerabilities out of the US and in restricting exports of critical semiconductor hardware.
Mandatory AI reporting requirements
Suseel Menon, Practice Director at Everest Group, noted, “Determining the impact of such reporting will take time and further clarity on the extent of reporting required.”
Beyond cost concerns, there is also a potential impact on innovation.
Swapnil Shende, Associate Research Manager at IDC, said, “The proposed AI reporting requirements seek to bolster safety but risk stifling innovation. Striking a balance is crucial to nurture both compliance and creativity in the evolving AI landscape.”
This development follows California’s recent passage of SB 1047, which could impose the toughest AI regulations in the US. The tech industry had pushed back against the bill, with over 74% of companies expressing opposition.
Major firms like Google and Meta have raised concerns that the bill could create a restrictive regulatory environment and stifle AI innovation. Menon added, “High regulatory barriers tend to stifle innovation, which is why the US has historically favored looser regulations compared to the EU. Complex regulations could draw innovative projects and talent out of certain regions, much like tax havens do for economic activity.”
While the US aims to enhance national security through stringent AI regulations, balancing these measures with the need to foster innovation will be a critical challenge.