On Monday, the U.S. Commerce Department released a report endorsing “open-weight” generative AI models like Meta’s Llama 3.1, while also recommending that the government develop new capabilities to monitor these models for potential risks.
The report, authored by the National Telecommunications and Information Administration (NTIA), highlights that open-weight models expand generative AI's accessibility to small companies, researchers, nonprofits, and individual developers. For that reason, the report recommends that the government refrain from restricting access to open models until it has investigated whether such restrictions would harm the market.
This perspective aligns with recent comments from FTC Chair Lina Khan, who believes that open models can enable more small players to bring their ideas to market, thereby fostering healthy competition.
Alan Davidson, Assistant Secretary of Commerce for Communications and Information and NTIA Administrator, stated, “The openness of the largest and most powerful AI systems will affect competition, innovation, and risks in these revolutionary tools. NTIA’s report recognizes the importance of open AI systems and calls for more active monitoring of risks from the wide availability of model weights for the largest AI models. Government has a key role to play in supporting AI development while building capacity to understand and address new risks.”
The report arrives as regulators both in the U.S. and internationally are considering rules that could impose new requirements or restrictions on companies releasing open-weight models.
In California, bill SB 1047 is nearing passage. This bill would require companies training models with more than 10^26 FLOPs of compute to enhance their cybersecurity measures and develop a method to “shut down” copies of the model under their control. Meanwhile, the EU has finalized compliance deadlines under its AI Act, which introduces new rules on copyright, transparency, and AI applications.
Meta has indicated that the EU’s AI policies might prevent it from releasing some open models in the future. Additionally, several startups and major tech companies have opposed California’s law, arguing that it is too burdensome.
The NTIA’s approach to model governance is not entirely hands-off. The report recommends that the government establish a continuous program to gather evidence on the risks and benefits of open models, assess this evidence, and take action based on these assessments, including imposing certain restrictions if necessary. Specifically, the report suggests that the government should research the safety of various AI models, support risk mitigation research, and develop thresholds for “risk-specific” indicators to determine if policy changes are needed.
These steps align with President Joe Biden’s executive order on AI, as noted by U.S. Secretary of Commerce Gina Raimondo. The order calls for new standards around the creation, deployment, and use of AI by government agencies and companies.
“The Biden-Harris Administration is pulling every lever to maximize the promise of AI while minimizing its risks,” Raimondo said in a press release. “Today’s report provides a roadmap for responsible AI innovation and American leadership by embracing openness and recommending how the U.S. government can prepare for and adapt to potential challenges ahead.”