Regulating AI Models that Can Learn
If AI labs develop continual learning, that could pose a big challenge for existing regulatory paradigms
I have a post over at Lawfare today discussing continual learning—the goal many AI developers have to create models that can learn not just from their initial training, but from their day-to-day use. It turns out that this is a very challenging technical problem, but also one with huge potential upside.
If you haven’t thought much about the issue of continual learning, I’d recommend this blog post by Dwarkesh Patel about the bottleneck created by models’ inability to learn the way humans do. I’d also recommend this piece on the related issue of “context rot” by Timothy B. Lee. Both provide good context and perspective on why this is such an important issue for AI labs and users.
My piece at Lawfare asks what the development of continual learning might mean from a regulatory perspective. There is lots of conversation about better or worse paradigms for regulating AI labs and/or models, but there has not been much focus yet on whether or how our regulatory paradigms could handle continual learning. Thinking about regulating a technology that doesn’t exist is inherently speculative, of course. But given how important a goal continual learning is, and given the potential implications for regulation, I think it’s worth spending some time issue-spotting.
A key part:
Test-time training could erode the connection between knowledge of models and control over models, by allowing the most capable models to be developed and changed in meaningful, permanent ways by users. Almost all users are going to be much less knowledgeable than AI developers—less able to run tests and studies on their models, less informed about the broader technical landscape, and so on. So test-time training shifts some of the control over whether outcomes are good or bad from centralized, knowledgeable actors (the AI labs) to diffuse, less-skilled actors (users). Because developers will have lost some of their control over their models’ capabilities and tendencies, it may be harder to hold them liable for certain outcomes.
This shift in control also poses a challenge for one of the paradigms that has received growing support recently: regulations that take AI companies themselves as the targets of regulation, rather than focusing on regulating those companies’ products directly. As it has become clearer that regulations focused on models themselves have important limitations, thoughtful commentators have advocated for “organization-level” or “entity-based” regulatory approaches that focus on the policies and practices of AI model developers. But if developments in continual learning mean that significant changes to models will happen after they are released to the world, that could undermine the efficacy of regulations that focus on courses of conduct companies take in earlier stages, when the models are first being developed. There are plenty of other reasons to support entity-based approaches to regulation, but continual learning may nonetheless be a challenge for them, should it arise.
If you’re interested, head over to Lawfare and check it out!