Framework for Artificial Intelligence Diffusion
(federalregister.gov) | 147 points by chriskanan a day ago
"First, it assumes that scaling models automatically leads to something dangerous"
The regulation doesn't exactly make this assumption. Not only are large models stifled; so is the ability to serve models via API to many users, and the ability to have many researchers working in parallel on upgrading the model. It wholesale stifles AI progress for the targeted nations. This is an appropriate restriction on what will likely be a core part of military technology in the coming decade (e.g. drone piloting).
Look, if Russia didn't invade Ukraine and China didn't keep saying they wanted to invade Taiwan, I wouldn't have any issues with sending them millions of Blackwell chips. But that's not the world we live in. Unfortunately, this is the foreign policy reality that exists outside of the tech bubble we live in. If China ever wants to drop their ambitions over Taiwan then the export restrictions should be dropped, but not a moment sooner.
Israel is a known industrial espionage threat to the US; how do you think they got nuclear weapons? Some analysts say they're the largest threat after China. Not to mention they're currently using AI in targeting systems while under investigation for war crimes.
It could be related to 14 Eyes with modifications (Finland and Ireland, plus close Asian allies).
https://res.cloudinary.com/dbulfrlrz/images/w_1024,h_661,c_s... (from https://protonvpn.com/blog/5-eyes-global-surveillance).
Israel, Poland, Portugal, and Switzerland are also missing from it.
> Switzerland? Israel?
I hope someone with a better understanding of the details can jump in, but they are both Tier 2 (not Tier 3) restricted, so maybe there are some available loopholes or Presidential override authority or something. Also I believe they can still access uncapped compute if they go via data centers built in the US.
Limiting US GPU exports to unaligned countries is completely counterproductive as it creates a market in those countries for Chinese GPUs, accelerating their development even faster. Because a mediocre Huawei GPU is better than no GPU. And it harms the revenue of US-aligned GPU companies, slowing their development.
> Even if future amendments try to address test-time compute, the proposed regulation seems premature. There are too many unknowns in future AI development to justify using a fixed compute-based threshold as a reliable indicator of potential risk.
I'm disinclined to let that be a barrier to regulation, especially of the export-control variety. It seems like letting the perfect be the enemy of the good: refusing to close the barn door you have, because you think you might have a better barn door in the future.
> Instead of focusing on compute thresholds or model sizes, policymakers should focus on regulating specific high-risk AI applications, similar to how the FDA regulates AI software as a medical device. This approach targets the actual use of AI systems rather than their development, which is more aligned with addressing real-world risks.
How do you envision that working, specifically? Especially when a lot of models are pretty general and not very application-specific?
> It seems like letting the perfect be the enemy of the good: refusing to close the barn door you have, because you think you might have a better barn door in the future.
Am I missing something? I am not an expert in the field, but from where I sit, there literally is no barn door left to close at this point, even belatedly.
> First, it assumes that scaling models automatically leads to something dangerous.
The impression I had is that the causation is reversed: the assumption is that a model can't be all that dangerous if it's smaller than this.
Assuming this alternative interpretation is correct, the idea may still be flawed, for the same reasons you say.
I also suspect that the only real leverage the U.S. has is on big compute (i.e. requires the best chips), and less capable chips are not as controllable.
While your critiques most likely have some validity (and I am not positioned to judge their validity), you failed to offer a concrete policy alternative. The rules were undoubtedly made with substantial engagement from the industry and academic researchers, as there is too much at stake for them not to engage, and vigorously. Likely there were no perfect policy solutions, but the drafters decided not to let the perfect stop the good enough, since timeliness matters as much as or more than the policy specifics.
Doing nothing is a better alternative, because these restrictions will just encourage neutral countries to purchase Chinese GPUs, because their access to US GPUs is limited by these regulations. This will accelerate the growth of Chinese GPU companies and slow the growth of US-aligned ones; it's basically equivalent to the majority of nations in the world placing sanctions on NVidia.
The most salient thing in the document is that it put export controls on releasing the weights of models trained with 10^26 operations. While there may be some errors in my math, I think that corresponds to training a model with over 70,000 H100s for a month.
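For what it's worth, a rough back-of-the-envelope version of that math (the per-GPU throughput and utilization figures below are my assumptions, not numbers from the document):

    # Sanity check: can 70,000 H100s reach ~1e26 operations in a month?
    # Assumed: ~1e15 FLOP/s peak BF16 per H100, ~40% sustained utilization.
    flops_per_h100 = 1e15
    utilization = 0.4
    n_gpus = 70_000
    seconds_per_month = 30 * 24 * 3600

    total_ops = flops_per_h100 * utilization * n_gpus * seconds_per_month
    print(f"{total_ops:.1e}")  # ~7.3e+25, i.e. on the order of 10^26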
I personally think the regulation is misguided, as it assumes we won't identify better algorithms/architectures. There is no reason to assume that the level of compute leads to these problems.
Moreover, given the emphasis on test-time compute nowadays, and that many companies seem to have hit a wall on performance gains from scaling LLMs at train time, I don't think this regulation is especially meaningful.
Traditional export control is applied to advanced hardware because the US doesn't want its adversaries to have access to things that erode the US military advantage. But most hardware is only controlled at the high end of the market. Once a technology is commoditized, the low-end stuff is usually widely proliferated. Night vision goggles are an example: only the latest-generation technology is controlled, and low-end stuff can be bought online and shipped worldwide.
Applying this to your thoughts about AI: as the efficiency of training improves, the ability to train models becomes commoditized, and those models would no longer be considered advantageous and would not need to be controlled. So maybe setting the export control based on the number of operations is a good idea; it naturally allows efficiently trained models to be exported, since they wouldn't be hard to train in other countries anyway.
As computing power scales, maybe the 10^26 limit will need to be revised, but setting the limit based on the scale of the training is a good idea since it is actually measurable. You couldn't realistically set the limit based on the capability of the model, since benchmarks seem to become irrelevant every few months due to contamination.
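For concreteness, training compute for dense transformers is commonly estimated with the 6ND rule of thumb, which is one reason a FLOP threshold is easy to measure (the parameter and token counts below are made up for illustration):

    # Rule of thumb: training ops ~= 6 * parameters * training tokens
    # (covers the forward and backward passes of a dense transformer).
    def training_ops(n_params: float, n_tokens: float) -> float:
        return 6 * n_params * n_tokens

    # A hypothetical 1T-parameter model trained on 15T tokens:
    print(f"{training_ops(1e12, 15e12):.1e}")  # ~9.0e+25, just under 1e26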
I wonder what makes people believe that the US currently enjoys any kind of meaningful "military advantage" over e.g. China? Especially after failing to defeat the Taliban and running from the Houthis. This seems like a very dangerous belief to have. China has 4x the population and outproduces us 10:1 in widgets (2:1 in dollars). Considering just steel, China produces about 1 billion metric tons per year; we produce 80 million tons. Concrete? 2.4B tons vs 96M tons. 70+% of the world's electronics. Their shipbuilding industry is 230x more productive (not a typo). Etc, etc.
The short term profits US businesses have been enjoying over the past 25 years came at a staggering long term cost. The sanctions won't even slow down the Chinese MIC, and in the long run they will cause them to develop their own high end silicon sector (obviating the need for our own worldwide). They're already at 7nm, at a low yield. That is more than sufficient for their MIC, including the AI chips used there, currently and in the foreseeable future.
a) just because the government has policies doesn't mean they are 100% effective
b) export controls aren't expected to completely prevent a country from gaining access to a technology, just to make it take longer and require more resources to achieve
You may also be misjudging how much money China will spend to develop their semiconductor industry. Sure, they will eventually catch up to the West, but the money they spend along the way won't be spent on fighter jets, missiles, and ships. It's still preferable (from the US perspective) to having no export controls and China being able to import semiconductor designs, manufacturing hardware, and AI models trained using US resources. At least this way China is a few months behind and will have to spend a few billion yuan to achieve it.
Of course. They're mitigations, not preventions. Few defenses are truly preventative. The point is to make it difficult. They know bad actors will try to circumvent it.
This isn't lost on the authors. It is explicitly recognized in the document:
> The risk is even greater with AI model weights, which, once exfiltrated by malicious actors, can be copied and sent anywhere in the world instantaneously.
This. We put toasters on the internet and are no longer surprised when services we use send us breach notices at regular intervals. The only thing this regulation would do, as written, is add an interesting choke point for compliance regulators to obsess over.
>The most salient thing in the document is that it put export controls on releasing the weights of models trained with 10^26 operations.
Does this affect open source? If so, it'll be absolutely disastrous for the US in the longer term, as eventually China will be able to train open weights models with more than that many operations, and everyone using open weights models will switch to Chinese models because they're not artificially gimped like the US-aligned ones. China already has the best open weights models currently available, and regulation like this will just further their advantage.
This appears to be a very shallow take and a lazy argument that does not capture even the basic nuance of the issue at hand. For the sake of expanding it a little and hopefully moving it in the right direction, I will point out that the BIS framework discusses advanced models as dual-use goods (i.e., not automatically weapons).
edit: (removed exasperated sigh; it does not add anything)
We can't let perfect be the enemy of good, regulations can be updated. Capping FLOPs is a decent starter reg.
Counterpoint would be the $7.25 minimum wage. It can be updated, but politicians aren't good at doing that. In both cases (FLOPs and minimum wage), at least a lower bound for inflation should be included:
Something like: 10^26 FLOPs * 1.5^n, where n is the number of years since the regulation was published.
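A minimal sketch of that indexing, assuming the illustrative 1.5x annual growth factor above (not a calibrated figure):

    # Hypothetical inflation-adjusted compute cap: 1e26 FLOPs * 1.5^n,
    # where n is years since the regulation was published.
    def compute_cap(years_since_publication: int) -> float:
        return 1e26 * 1.5 ** years_since_publication

    for n in range(6):
        print(n, f"{compute_cap(n):.2e}")  # 1.00e+26 up to ~7.59e+26 at year 5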
> Something like: 10^26 FLOPs * 1.5^n, where n is the number of years since the regulation was published.
Why would you want to automatically increase the cap algorithmically like that?
The purpose of a regulation like this is totally different from the minimum wage. If the point is to keep an adversary behind, you want them to stay as far behind as you can manage for as long as possible.
So if you increase the cap, you only want to increase it when it won't help the adversary (because they have alternatives, for instance).
This smells a lot like the misguided crypto export laws in the 90s that hampered browser security for years.
And don't forget the amazing workaround Zimmerman of PGP fame came up with - the source code in printed form was protected 1A speech, so it was published, distributed, and then scanned and OCR'd outside the US - https://en.wikipedia.org/wiki/Pretty_Good_Privacy#Criminal_i...
I hope this time we finally get a Supreme Court ruling that export controls on code are unconstitutional, instead of the feds chickening out like last time
Using 2D barcodes you can fit ~20 MB per page. Front and back, you could probably fit a model that violated the rule on fewer than a thousand pages.
Edit: maybe 10k pages
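Rough math behind that estimate, assuming a hypothetical ~70B-parameter model stored at fp16 and the ~20 MB-per-side figure above (both assumptions, and compression would shrink it further):

    # Pages needed to print a large model's weights as 2D barcodes.
    weights_bytes = 70e9 * 2        # ~70B params * 2 bytes (fp16) = ~140 GB
    bytes_per_side = 20e6           # ~20 MB per printed side
    sheets = weights_bytes / (2 * bytes_per_side)  # printed front and back
    print(f"{sheets:,.0f} sheets")  # ~3,500 sheets (~7,000 sides)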
It hampered the security of a lot of things. That wasn't misguided -- that was the point.
China, Russia, and Iran used Internet Explorer too.
It’s worth noting that this splits countries into three levels - first without restrictions, second with medium restrictions, third with harsh restrictions.
And the second level, for some reason, includes (among others) a bunch of countries that would normally be seen as close US allies - e.g. some NATO countries (most of Central/Eastern Europe).
I see the return of the Cold War computing model, where many countries had their own computer platforms and programming languages.
Which might be a good outcome for FOSS operating systems, with national distributions like Kylin.
As a European, I vote for SuSE.
This smells about as well informed as the genius move that forced Firefly Aerospace's Ukrainian owner, Max Polyakov, to divest. A US government position that was widely derided by space industry watchers, and has now been reversed.
This might be a product of the USA being a gerontocracy.
Someone still sees Eastern Europe as a provider of cheap brainpower. This is insulting.
>> And the second level, for some reason, includes (among others) a bunch of countries that would normally be seen as close US allies - e.g. some NATO countries (most of Central/Eastern Europe).
> Yeah, this is really a bit insulting.
So you're insulted some country or other wasn't included in:
> First, this rule creates an exception in new § 740.27 for all transactions involving certain types of end users in certain low-risk destinations. Specifically, these are destinations in which: (1) the government has implemented measures to prevent diversion of advanced technologies, and (2) there is an ecosystem that will enable and encourage firms to use advanced AI models to advance the common national security and foreign policy interests of the United States and its allies and partners.
?
IMHO, it's silly to get insulted over something like that. Your feelings are not a priority for an export control law.
Taiwan, even though it's a US ally, is only allowed limited access to certain sensitive US technology it deploys (IIRC, something about Patriot Missile seeker heads, for instance), because their military is full of PRC spies (e.g. https://www.smh.com.au/technology/chinese-spies-target-taiwa...), so if they had access the system would likely be compromised. It's as simple as that.
Most of the comments here only make sense under a model where AI isn't going to become extremely powerful in the near term.
If you think upcoming models aren't going to be very powerful, then you'll probably endorse business-as-usual policies such as rejecting any policy that isn't perfect or insisting on a high bar of evidence before regulating.
On the other hand, if you have a world model where AI is going to provide malicious actors with extremely powerful and dangerous technologies within the next few years, then instead of being radical, proposals like this start appearing extremely timid.
What do the regulators writing this intend for this to slow down/stop?
I can't seem to find any information about that anywhere.
Obviously, to prevent proliferation of dual-use technologies to potentially adversarial actors. The same intent is behind restricting high-fidelity infrared cameras and phased-array radar equipment.
China is leading the AI race with their open-source DeepSeek-V3. It is laughable to think that this regulation will stop them. The USA should actually collaborate, not isolate.
Chinese engineers have the capability to get around these silly sanctions, for example by renting cloud GPUs from US companies to get access to as much compute as they want, or by using consumer-grade hardware, or their homegrown Chinese CPUs/GPUs.
The USA should actually embrace open source and collaborate, as we are still at the very beginning of the AI revolution.
The entire point is to not collaborate, because this tech is being used for military purposes. The US wants to throw up roadblocks to make it more difficult. Obviously, against a foreign military, anything is a mitigation and not a prevention.
> Chinese engineers have the capability to get around these silly sanctions, for example by renting cloud GPUs from US companies
That's why they're also moving towards KYC for cloud providers.
https://www.federalregister.gov/documents/2024/01/29/2024-01...
Maybe they intend for it to speed up/start implementation of federal agencies and regulations. The intent is to exert control over an emerging market while it’s still comprised of cooperative participants. Regulators want to define the regulatory frameworks rather than relying on self-policing before it’s “too late.”
Let’s see if this survives the next administration. Normally I’d be skeptical, but Musk has openly warned about the “dangers” of AI and will likely embrace attempts to regulate it, especially since he’s in a position to benefit from the regulatory capture. In fact he’s doubly well placed to take advantage of it. Regardless of his politics, xAI is a market leader and would already be naturally predisposed to participate in regulatory capture. But now he also enjoys unprecedented influence over policymaking (Mar-a-Lago) and regulatory reform (DOGE). It’s hard to see how he wouldn’t capitalize on that position.
> Regardless of his politics, xAI is a market leader
Lol what?
The only people who think this are Elon fanboys.
I guess you think Tesla is the self-driving market leader, too. Okay.
I don’t even use it. But in terms of funding, it’s in the top 5, according to Crunchbase data [0].
[0] https://news.crunchbase.com/ai/startup-billion-dollar-fundra...
Hard to take comments like this seriously when you can’t even be bothered to be associated with it from your primary account.
Only in some diplomatic contexts.
The compute limit is dead on arrival, because models are becoming more capable with less training anyways. (See DeepSeek, Phi-4)
Strong opposition to this regulation seems to be one of the main things that led a16z, Oracle, etc. to go all in for Donald Trump. It's interesting that Meta too fought the regulation by its unprecedented open sourcing of model weights.
Regardless of who is currently in the lead, China has its own GPUs and a lot of very smart people figuring out algorithmic and model design optimizations, so China will likely be in the lead more obviously within 1-2 years, both in hardware and model design.
This law is likely not going to be effective in its intended purpose, and it will prevent peaceful collaboration between US and Chinese firms, the kind that helps prevent war.
The US is moving toward a system where government controls and throttles technology and picks winners. We should all fight to stop this.
> The US is moving toward a system where government controls and throttles technology and picks winners
What else can it do? They don’t want to lose their lead, and whatever restrictions they’ve been putting on China et al. have had the exact desired outcomes so far. The idea is to try to slow down a beast that has very set goals (e.g. to become the high-tech manufacturing and innovation center), and try to play catch-up (like on-shoring some manufacturing).
Personally, I’m skeptical that it will work, because by raw number of hands on deck, they have the advantage. And it’s fairly hard when your institutional knowledge of doing big things is a bit outdated. I would argue a good bet in North America would be finding a financially engineered solution to get Asian companies to bring their workers and knowledge here to ramp us up. Kinda like the TSMC factory. Basically the same thing China did in the 2000s with western companies.
> They don’t want to lose their lead, and whatever restrictions they’ve been putting on China et al. have had the exact desired outcomes so far.
They absolutely have not. The best open weights LLM is Chinese (and it's competitive with the leading US closed source ones), and around 10x cheaper both to train and to serve than its western competitors. This innovation in efficiency was largely brought about by US sanctions limiting GPU availability.
> The US is moving toward a system where government controls and throttles technology and picks winners.
Moving towards? The US has a pretty solid history of doing a great deal of this (and more) in the 20th century. But so did all of the world's powers... as they all continue to do today. It seems to be an inherent part of being a world power.
I agree this law won’t be effective in its intended purpose, and that China will develop models of their own that are sufficiently competitive (as we’ve already seen). However, I think seeking “peaceful collaboration” between the US (or Europe or many others) and China - either between governments or private firms - is a naive strategy that will simply lead to the US being replaced by a more dangerous superpower that does not respect the values of free and democratic societies.
I also think that to a great extent, we’re already at war. China has not respected intellectual property rights, conducted espionage against both companies and government agencies, repeatedly performed successful cyberattacks, helped Russia in the Ukraine conflict, severed telecommunications cables, and more. They’ve also built up the world’s largest navy, expanded their nuclear arsenal, and are working on projects to undermine the status of the US Dollar. All of this should have been met with a much stronger and forceful reaction, since clearly it does not fit into the notion of “peaceful collaboration”.
China’s unpeaceful actions aren’t limited to the West. China annexed much of its current territory illegally and through force (see Xinjiang and Tibet). When Hong Kong was handed back, it was under a treaty that China now says is not valid. China has been trying to steal territory from neighboring countries repeatedly, for example with Bhutan or India. They’ve also threatened to take over Taiwan many times now, and may do so soon. They’re about to build a dam that will prevent water from reaching Bangladesh and force them to become subjugated. The only peaceful and just outcome is for those territories to be freed from the control of China - which will require help from the West (sanctions, tariffs, blockades, and maybe even direct intervention).
Even within China, the CCP rules with an iron fist and violates virtually all principles of free societies and classically liberal values that we value in the West. I don’t see that changing. And if it doesn’t, how can they be trusted with more economic and military power? That’s why I don’t think we should seek peaceful collaboration with China. We just need smarter strategies than this hasty AI declaration.
You think China is harmless? Go tell that to Tibet, India, or Taiwan.
I am not sure why Sikkim is relevant. I am not familiar with it, so I read about it now. And it looks like India’s prime minister actually pushed a resolution through noting that Sikkim is independent. But then Sikkim had a domestic movement to join India and voluntarily did that. This seems like the opposite of annexation.
Regarding floods - if Bangladesh wants an upstream dam, why aren’t they included as a decision maker on whether the Chinese dam goes ahead? Clearly this is because they would say no to it. The issue isn’t floods - it’s that China can withhold water for drinking and irrigation and threaten the country with starvation and famine. It’s a huge national security threat.
I don't think you're wrong but Big Tech is bending the knee to Trump because he will be picking the winners.
More information here:
https://www.federalregister.gov/documents/2025/01/15/2025-00...
Related:
WH Executive Order Affecting Chips and AI Models
This feels like déjà vu from the crypto wars of the 1990s. If that experience is any guide, it is impossible to repress knowledge without violence, and trying only motivates more people to hack the system. Good times ahead. "PGP released its source code as a book to get around US export law" <https://news.ycombinator.com/item?id=7885238>
Not the same situation at all. PGP would run on any computer you happened to have around. The source code was small enough to fit in a book. The people who already had the code wanted to release it. Lots of people could have rewritten it relatively quickly.
The ML stuff they're worried about takes a giant data center to train and an unusually beefy computer even to run. The weights are enormous and the training data even more so. Most of the people who have the models, especially the leading ones, treat them as trade secrets and also try to claim copyright in them. You can only recreate them if you have millions to spend and the aforementioned big data center.
> The ML stuff they're worried about takes a giant data center to train and an unusually beefy computer even to run.
Now, consider this: the Palm [1] couldn't even create an RSA [2] public/private key pair in “user time”. The pace of technological advancement is astonishing, and new techniques continually emerge to overcome current limitations. For example, in 1980, Intel was selling mathematical coprocessors [3] that were cutting-edge at the time but would be laughable today. It's reasonable to expect that the field of machine learning will follow a similar trajectory, making what seems computationally impractical now far more accessible in the future.
[1] https://en.wikipedia.org/wiki/Palm_(PDA)
It's for the government's google analytics account, which is open data: https://analytics.usa.gov/general-services-administration
What’s the point in this? Isn’t Trump going to just cancel this immediately on Monday?
I don’t see how we can assume it will be enacted at all.
I'm not sure why the link no longer works, but this one works. The link should be updated to this one: https://www.federalregister.gov/documents/2025/01/15/2025-00...
I have no idea if comments actually have any impact, but here is the comment I left on the document:
I am Christopher Kanan, a professor and AI researcher at the University of Rochester with over 20 years of experience in artificial intelligence and deep learning. Previously, I led AI research and development at Paige, a medical AI company, where I worked on FDA-regulated AI systems for medical imaging. Based on this experience, I would like to provide feedback on the proposed export control regulations regarding compute thresholds for AI training, particularly models requiring 10^26 computational operations.
The current regulation seems misguided for several reasons. First, it assumes that scaling models automatically leads to something dangerous. This is a flawed assumption, as simply increasing model size and compute does not necessarily result in harmful capabilities. Second, the 10^26 operations threshold appears to be based on what may be required to train future large language models using today’s methods. However, future advances in algorithms and architectures could significantly reduce the computational demands for training such models. It is unlikely that AI progress will remain tied to inefficient transformer-based models trained on massive datasets. Lastly, many companies trying to scale large language models beyond systems like GPT-4 have hit diminishing returns, shifting their focus to test-time compute. This involves using more compute to "think" about responses during inference rather than in model training, and the regulation does not address this trend at all.
Even if future amendments try to address test-time compute, the proposed regulation seems premature. There are too many unknowns in future AI development to justify using a fixed compute-based threshold as a reliable indicator of potential risk. Instead of focusing on compute thresholds or model sizes, policymakers should focus on regulating specific high-risk AI applications, similar to how the FDA regulates AI software as a medical device. This approach targets the actual use of AI systems rather than their development, which is more aligned with addressing real-world risks.
Without careful refinement, these rules risk stifling innovation, especially for small companies and academic researchers, while leaving important developments unregulated. I urge policymakers to engage with industry and academic experts to refocus regulations on specific applications rather than broadly targeting compute usage. AI regulation must evolve with the field to remain effective and balanced.
---
Of course, I have no skin in the game since I barely have any compute available to me as an academic, but the proposed rules on compute just don't make any sense to me.