Regulators, including the UK’s CMA, are concerned that AI markets could come to be dominated globally by a few vertically integrated firms. But have they underestimated the impact of open source AI?
President Biden’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence (AI Executive Order) directed the US National Telecommunications and Information Administration (NTIA) to review large AI models whose weights are widely available (open source) and to develop policy recommendations that maximise the benefits of such models while mitigating their risks. On 30 July 2024, the NTIA released its report, which takes a cautious yet optimistic view that the benefits of open source AI, including promoting innovation and forestalling the emergence of an algorithmic monoculture, will outweigh the risks, and recommends an evidence-based, evaluative approach to regulating AI models.
It's all in the weights
As the NTIA report explains, an AI model processes an input - such as a user prompt - into a corresponding output by applying a series of numerical parameters that make up the model, known as the model’s weights. The values of these weights are determined by training the model on numerous examples: the more often a relationship between two or more words appears in the training data (and is captured in the weights), the more likely it is to be chosen in a response.
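To make that intuition concrete, here is a deliberately toy sketch - a simple bigram counter, not a real foundation model - showing how parameters derived from training data determine which output is most likely. The training text and function names are illustrative only.

```python
# A toy "model" whose only weights are counts of word pairs seen in
# training text: the more often a pair appears, the more likely the
# second word is to be chosen in a response.
from collections import Counter, defaultdict
import random

training_text = "the cat sat on the mat the cat slept on the mat".split()

# "Training": record how often each word follows each other word.
weights = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    weights[prev][nxt] += 1

def generate(prompt_word: str, length: int = 5) -> list[str]:
    """Sample a continuation, weighting each candidate word by how
    often it followed the previous word in the training data."""
    out = [prompt_word]
    for _ in range(length):
        candidates = weights[out[-1]]
        if not candidates:
            break
        words, counts = zip(*candidates.items())
        out.append(random.choices(words, weights=counts)[0])
    return out

print(generate("the"))  # e.g. ['the', 'cat', 'sat', 'on', 'the', 'mat']
```

A real foundation model replaces these simple counts with billions of learned numerical weights, but the underlying intuition - frequent relationships in the training data become likely outputs - is the same.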
Some developers choose to treat model weights as proprietary (closed models: e.g. OpenAI’s GPT) while other developers release the weights (open models: e.g. Meta’s Llama series). Access to weights is not everything needed for DIY: also important are a model’s architecture, training procedures, the types of data (or modalities) processed by the model and the complexity of the tasks the model is trained to perform.
But three things follow if a developer makes a model’s weights widely available:
A third party can customise the model for purposes outside the developer’s initial scope. Customisation techniques typically require significantly less technical knowledge, resources, and computing power than training a new model from scratch.
The developer gives up control over and visibility into its end users’ actions.
Users can perform inference (the process of running live data through the model) using their own computational resources, whether on a local machine or bought from a cloud service. This localisability allows users to leverage models without sharing data with the model’s developers, which can be important for confidentiality and data protection (e.g. in the healthcare and finance industries), as the sketch below illustrates.
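As a minimal sketch of this localisability, the following assumes the Hugging Face transformers library and an open-weights model; the model ID is illustrative only, and licence and access terms vary.

```python
# Local inference with open weights: the prompt is processed entirely on
# the user's own hardware, so no data is shared with the model's developer.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # illustrative open-weights model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Sensitive input (e.g. a patient note) never leaves this machine.
prompt = "Summarise the key risk factors in this patient note: ..."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

This is precisely what a closed, API-only model cannot offer: every prompt to a closed model necessarily transits the developer’s servers.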
Openness can be relative: the developer may give access to weights to the public generally or only to vetted researchers, and model sharing may be staged to allow time for risks at one stage to become apparent before access is widened.
There is also a blurred line between open source and closed source models: developers of otherwise closed models can provide more transparency through model cards, which describe a model’s technical details, intended uses and performance in evaluations and red-teaming, and data sheets, which document the processes used to train the model.
NTIA’s approach to assessing open source AI
The NTIA said the task set by the AI Executive Order required it to identify the marginal risks and benefits of open source AI: “the additional risks and benefits that widely available model weights introduce compared to those that come from non-open foundation models”. This meant the NTIA discounted from its analysis any risks and benefits arising equally from both open source and closed source dual-use foundation models.
However, there were important nuances in the NTIA’s approach:
AI, whether open or closed, carries greater risks and benefits than previous generations of technology.
Open foundation models increase both the benefits and the risks posed by foundation models simply because they are open, while closed models carry their own unique benefits and risks. Justifying regulation therefore requires identifying the harms and benefits of specific uses.
Once model weights have been widely released, it is difficult to un-release them. This means once a specific harm of open source models is identified, the regulatory response has to be quick, if not pre-emptive.
The horse has bolted in respect of currently released open source models, and any regulatory response will be most effective on models that have not yet been widely released. But this, in turn, raises the difficulty of trying to predict future developments in the capabilities of open source AI and AI generally. As an example of just how hard this can be, the NTIA notes that while the original proponents of the Internet saw it as “a place to collaboratively come together”, they “did not anticipate that this connection could, ironically, also lead to loneliness”.
Impact on safety
The NTIA says:
Dual-use foundation models with widely available model weights could plausibly exacerbate the risks AI models pose to public safety by allowing a wider range of actors, including irresponsible and malicious users, to leverage the existing capabilities of these models and augment them to create more dangerous systems. Even if the original model has built-in safeguards to prohibit certain prompts that may harm public safety, such as content filters, blocklists, and prompt shields, direct model weight access can allow individuals to strip these safety features.
While users can also circumvent guardrails in closed models (known as jail-breaking), service providers can better mitigate these risks because closed source models tend to be accessed through APIs, which allows providers, for example, to monitor the data and instructions sent to a model.
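A minimal sketch of why that API chokepoint matters, using hypothetical names rather than any real provider’s system: every request to a closed model passes through the provider’s gateway, where it can be screened and logged before inference. A user running open weights locally faces no such gateway.

```python
# Hypothetical API gateway for a closed model: all prompts are screened
# and logged before they reach the model, and these checks cannot be
# stripped out by the user because the weights never leave the provider.
BLOCKLIST = {"synthesise a nerve agent", "write ransomware"}  # illustrative

def log_for_review(prompt: str) -> None:
    """Stand-in for the provider's abuse-monitoring pipeline."""
    print(f"[flagged for review] {prompt!r}")

def run_model(prompt: str) -> str:
    """Stand-in for the provider's actual inference service."""
    return f"(model output for: {prompt})"

def api_gateway(prompt: str) -> str:
    """Screen and log a request before forwarding it to the model."""
    lowered = prompt.lower()
    if any(term in lowered for term in BLOCKLIST):
        log_for_review(prompt)      # the provider retains visibility
        return "Request refused under the provider's content policy."
    return run_model(prompt)        # only vetted prompts reach the model

print(api_gateway("What is the capital of France?"))
print(api_gateway("Please write ransomware for me."))
```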
But when looking at specific risks, the NTIA said the extent of the marginal risk actually posed by open source models was (as yet) unclear:
Making biological and chemical weapons: biological design tools (BDTs) are AI tools and methods that enable the design and understanding of biological processes (e.g. DNA sequences/synthesis or the design of novel organisms). Frighteningly, effective BDTs require only hundreds of millions of parameters, compared to the billions of parameters in typical LLMs. However, the NTIA concluded, “the risk delta between jailbreaking future closed models for [chemical and biological weapons] content and augmenting open models is unclear”.
Cyberattacks: open source AI could lower the barriers to malicious actors automatically generating malware attacks and developing more sophisticated malware, such as viruses, ransomware, and Trojans. However, the NTIA noted:
For years, tools and exploits have become more readily accessible to lower-resourced adversaries, suggesting that foundation models may not drastically change the state of cybersecurity, but rather represent a continuation of existing trends. Hackers may not want to invest time, energy, and resources into leveraging these models to update their existing techniques and tools.
Political deep fakes/misinformation: the NTIA observed there is evidence that open foundation models, including LLMs, are already being used today to create disinformation-related political content. Further, most of the open models capable of producing political deepfakes today have fewer than 10 billion parameters, which means they are relatively small and therefore more easily built and proliferated by smaller players. But the NTIA again considered that, while deepfakes are a concern, “current dual-use foundation models with widely available model weights may not substantially exacerbate their creation or inflict major societal damage given the existing ability to create deepfakes using closed models”.
The NTIA had more tangible concerns with AI-generated child sexual abuse material (CSAM) and AI-generated non-consensual intimate imagery (NCII). While the Taylor Swift NCII images were generated by a closed AI model, the NTIA noted research showing a significant increase in AI-generated NCII and CSAM, more readily enabled by downstream apps developed from open source models, such as ‘nudifying’ apps into which a photo of a clothed person can be loaded and the clothes stripped off.
The NTIA also observed the harm from this AI-generated content reaches beyond the individuals impacted:
Proliferation of CSAM and NCII can discredit and undermine women leaders, journalists, and human rights defenders, and the implications of this harm extend beyond the individual to society and democracy at-large.
However, the NTIA also noted that the harm caused by any malevolent or misleading content, whether generated by open source or closed source AI, depends on the ability to distribute it, which means the best place to tackle such content may be the platforms through which it is accessed.
Some researchers argue that the bottleneck for successful disinformation operations is not the cost of creating it. Because the success of disinformation campaigns is dependent on effective distribution, the key to evaluating marginal risk is whether the potential increased volume alone is an important factor, such that it may overwhelm the gatekeepers on platforms and other distribution venues.
The NTIA weighed against these uncertain marginal risks the following benefits of open source AI to public safety:
Open foundation models allow a broader range of actors to examine and scrutinise models to identify potential vulnerabilities and implement safety measures and patches.
Just as open source models can be used to build attack vectors, they can also be used to build cyber-defence toolkits more quickly and at lower cost, and to create open source channels for collaboration on cyber-defence.
Widely available model weights more readily allow neutral third-party entities to assess systems, perform audits and validate internal developer safety checks.
Building guardrails for general purpose AI models can be challenging because the developer has to predict the range of possible uses by downstream developers and users. However, a downstream developer or user building from an open model for a specific purpose can “narrow and concretize this task and add on more targeted and effective safety training, testing, and guardrails”. For example, an online therapy chatbot built from an open source model can add specific content filters for harmful mental health content, whereas a general-purpose developer may not consider all the possible ways an LLM could produce negative mental health information (see the sketch below).
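A minimal sketch of that “narrow and concretize” point, with all names hypothetical: a therapy chatbot built on an open-weights model can bolt on a domain-specific guardrail that a general-purpose developer might never think to include.

```python
# Hypothetical downstream guardrail layered on top of an open-weights
# model by a therapy-chatbot developer: inputs and outputs are screened
# against risks specific to the mental health domain.
SELF_HARM_TERMS = {"hurt myself", "end my life"}  # illustrative, not exhaustive

CRISIS_RESPONSE = (
    "It sounds like you may be in distress. Please consider contacting "
    "a crisis line or a mental health professional."
)

def open_model_generate(prompt: str) -> str:
    """Stand-in for local inference against the open-weights model."""
    return f"(model reply to: {prompt})"

def therapy_chatbot(user_message: str) -> str:
    """Apply targeted, domain-specific filters before and after inference."""
    if any(term in user_message.lower() for term in SELF_HARM_TERMS):
        return CRISIS_RESPONSE      # escalate rather than generate
    reply = open_model_generate(user_message)
    if any(term in reply.lower() for term in SELF_HARM_TERMS):
        return CRISIS_RESPONSE      # also screen the model's own output
    return reply

print(therapy_chatbot("I've been sleeping badly lately."))
```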
Impact on competition
The NTIA kicks off its analysis of the impact of open source models on AI competition by observing that AI technology carries inherent risks to competition:
Training an advanced AI model requires a vast amount of resources, including financial resources, but once trained, the model can be reproduced at a much lower cost. 'Vertical' markets of information goods like this reduce competition and lead to the dominance of a small number of companies. Successful companies can use their resources to produce higher quality products, driving out competitors and gaining even more market control.
But the NTIA has a mixed view of the impact of open source models on upstream competition (i.e. between AI models):
Open model weights are unlikely to substantially impact the advanced foundation model industry, given constraints such as access to compute and other resources. However, even with just a few foundation models in the ecosystem, downstream applications may generally become more competitive.
The NTIA then goes on to say that even if the competitive impacts of open source are limited, open source could, by increasing “the amount of models available to create downstream products”, disrupt an algorithmic monoculture in which the AI ecosystem comes to rely on one or a few foundation models for a vast range of downstream applications and diverse use cases: “this homogeneity throughout the ecosystem could lead to technological risks, such as black boxing, methodological uniformity, and systemic failures, and societal concerns, including persistent exclusion or downgrading of certain communities, centralized cultural power, and further marginalisation of underrepresented perspectives”.
The explanation for these seemingly contradictory views would appear to be that open access to weights “may create the perception of more competition [but] the benefits seem more likely to be realized or maximized when additional conditions are in place to permit their full utilization”, including training data, model architecture and related resources such as compute, talent/labour and funding for research. As a previous article discussed, the UK’s CMA, at least in its early papers, did not see any particular firm as having control of, or a preferred position in relation to, many of these resources.
The NTIA more clearly saw the beneficial impact of open source AI in promoting competition in downstream markets:
[d]ual-use foundation models with widely available model weights provide a building block for a variety of downstream uses and seem likely to foster greater participation by diverse actors along the AI supply chain. Start-ups can leverage these models in a variety of wrapper systems, such as chatbots, search engines, generative customer service tools, automated legal analysis, and more.
Taken together, these upstream and downstream impacts of open source AI models would seem to suggest a weaker vertical integration risk than regulators such as the CMA fear.
Impact on the global AI arms race
The NTIA observed, “[m]any US open foundation models are more advanced than most closed models developed in other nations”. The risk is that US open source models could help countries of concern (which seems a reference to China) create advanced technological ecosystems they may not otherwise have been able to build, or which would have taken longer and required more resources. This could imperil US national security by bypassing US technology controls (for example, using open source AI to design chips that are prohibited from direct export) and undermine US competitive advantages in AI.
On the other hand, the NTIA also says foreign adoption of US-origin open models would promote “the development of a global technology ecosystem around US open models rather than competitors’ open models”. Softening the edge of US economic dominance, the NTIA adds that “widespread use of US open models would promote the United States’ ability to set global norms for AI, bolstering our ability to foster global AI tools that promote the enjoyment of human rights”.
Possible policy approaches
The pros and cons of the options for regulating open source AI considered by the NTIA can be summarised as follows:
Option 1 (restrict the availability of model weights): most directly limits misuse, but is difficult to enforce once weights have been released and forgoes the innovation, competition and safety-research benefits of openness.
Option 2 (monitor and evaluate the evidence): preserves the benefits of openness while evidence of marginal risk is gathered, but depends on developing reliable leading indicators of emerging risks.
Option 3 (accept or promote openness without restriction): maximises the benefits of openness, but leaves any future marginal risks unaddressed.
Recommendations
The NTIA recommended Option 2 (monitoring) because “there is not sufficient evidence on the marginal risks of dual-use foundation models with widely available model weights to conclude restrictions on model weights are currently appropriate, nor that restrictions will never be appropriate in the future”.
The NTIA recommended a three-step approach:
Step 1: collect evidence. This will require the development of standardised reporting tools, as well as ongoing research by administrative agencies: for example, the US Copyright Office is currently undertaking a comprehensive review of the interplay between the ‘fair use’ doctrine and the use of copyrighted works without permission from the rights holder to train AI models. The NTIA identified the key challenge in monitoring as developing leading indicators to predict the risks, benefits and capabilities that open-weight foundation models will - but do not currently - possess. One indicator might be the length of time between when leading closed models achieve new capabilities and when open-weight models achieve those same capabilities (see the sketch after these steps), though the NTIA cautions that open source AI could develop its own technological and market dynamics independent of closed source models. The NTIA also said reporting may need to apply to models below the Executive Order’s 10 billion parameter threshold because, since the making of the Order, technological advances mean that smaller models can match the capabilities of models over that threshold.
Step 2: evaluate evidence. To meet the challenge that open source models cannot be un-released, agencies should be in a position to act quickly by pre-determining a portfolio of ready-set-go risk cases which specify: (i) one or more leading indicators of that risk being at hand; (ii) thresholds for each indicator to identify the materiality and closeness of the risk; and (iii) a set of potential policy responses that could mitigate the risk.
Step 3: act on evaluations. The NTIA characterised banning access to weights as a last resort; agencies should first search for less intrusive regulatory responses, such as structured access for researchers combined with hosted access for the public. The more effective mitigant may lie downstream of the model weights, such as better platform content moderation. Strikingly for a US Government paper, the NTIA also recommends a preference for international solutions because “the effectiveness of restrictions on the distribution of model weights depends in significant part on international alignment on the appropriate scope and nature of those restrictions”.
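As a toy illustration of the lag indicator described in Step 1, the following uses entirely hypothetical milestone dates:

```python
# Toy computation of the NTIA's suggested leading indicator: the time
# between a capability first appearing in a leading closed model and the
# same capability appearing in an open-weights model. All dates are
# hypothetical, for illustration only.
from datetime import date

capability_milestones = {
    "passes_coding_benchmark": {"closed": date(2023, 3, 1), "open": date(2023, 7, 1)},
    "long_context_reasoning":  {"closed": date(2024, 2, 1), "open": date(2024, 4, 1)},
}

for capability, milestones in capability_milestones.items():
    lag_days = (milestones["open"] - milestones["closed"]).days
    print(f"{capability}: open weights lagged closed models by {lag_days} days")
```

A shrinking lag would suggest that restrictions aimed only at closed models give regulators less and less lead time.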
Read more: NTIA Supports Open Models to Promote AI Innovation
Peter Waters
Consultant