Political parties hitching their wagon to "AI good" or "AI bad" aside, I'm actually a huge fan of this sort of anti-law. Legislators have been far too eager to write laws about computers and the Internet and other things they barely understand lately. A law that puts a damper on all that might give them time to focus on things that actually matter to their constituents instead of beating the tired old drum of "we've got to do something about this new tech."
The actual statute: https://archive.legmt.gov/content/Sessions/69th/Contractor_i...
Seems pretty vague to me, but IANAL.
No mention of DRM. Shame.
Background:
Trump signed an Executive Order (Dec 2025) preempting state AI safety laws, threatening to withhold $42.5B in broadband funding from states that refuse to comply (specifically targeting Colorado and California).
In response, New York enacted the "RAISE Act" after the EO was issued. It imposes strict safety, transparency, and reporting protocols for frontier models.
California is enforcing its "Transparency in Frontier AI Act" (Sept 2025) regardless of the federal threat. It requires developers of large AI models (over 10^26 FLOPs of training compute) to publicly disclose safety frameworks, report "catastrophic risk" incidents, protect whistleblowers, etc. (See the rough sketch after this list for what that threshold means in practice.)
Big Tech (OpenAI, Google, Andreessen Horowitz) is siding with Trump on this one. They prefer one weak federal law to 50 strict state laws.
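To put the 10^26 FLOPs cutoff in perspective, here's a minimal back-of-the-envelope sketch. It assumes the widely used ~6 × parameters × tokens estimate for total training compute, and the model sizes and token counts are hypothetical, not disclosed figures from any lab:

```python
# Rough check against the 10^26 FLOPs "frontier model" threshold.
# Assumes the common ~6 * N * D estimate of total training compute,
# where N = parameter count and D = training tokens.

THRESHOLD_FLOPS = 1e26

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute via the 6*N*D rule of thumb."""
    return 6 * params * tokens

# Hypothetical training runs (illustrative numbers only):
runs = [
    ("70B params, 15T tokens", 70e9, 15e12),
    ("400B params, 30T tokens", 400e9, 30e12),
    ("1T params, 20T tokens", 1e12, 20e12),
]

for name, params, tokens in runs:
    flops = training_flops(params, tokens)
    status = "covered" if flops >= THRESHOLD_FLOPS else "below threshold"
    print(f"{name}: {flops:.1e} FLOPs -> {status}")
```

Under that estimate, only the very largest runs (roughly a trillion parameters trained on tens of trillions of tokens) clear the bar, which suggests the law is aimed squarely at a handful of frontier labs.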
This post:
Red states are creating deregulation zones. If a big tech company has data centers in Montana and CA tries to impose an audit on its model, the company can sue, claiming its "Civil Rights" in Montana are being infringed by California's overreach.
Red states are tying "Compute" to the First Amendment (free expression), basically anticipating the Supreme Court.
Future implications:
The US continues to split into two distinct operating environments. https://www.economist.com/interactive/briefing/2022/09/03/am...
The goals of this law:
"So, hypothetically, in a state with a right-to-compute law on the books, any bill put forward to limit AI or computation, even to prevent harm, could be halted while the courts worked it out. That could include laws limiting data centers as well.
"The government has to prove regulation is absolutely necessary and there's no less restrictive way to do it," Wilcox said. "Most oversight can't clear that bar. That's the point. Pre-deployment safety testing? Algorithmic bias audits? Transparency requirements? All would face legal challenge."
My take: This sounds incredibly pro-industry and anti-democratic.
And scary. Really scary.
It's really funny how, for all the talk of AI safety, what has resulted is precisely the series of steps one would take if one were intentionally designing some kind of dystopian AI system.
> Similar to how free speech doesn't mean you can yell “Fire!” in a crowded theater
While I appreciate bringing attention to ongoing changes in the tech/legal landscape, I'll get my rundowns from a source that doesn't blindly repeat this broken assertion. Doesn't speak well of their research practices.
Yeah, that quote was "mere dicta" from day one (the case wasn't about shouting fire in a theater; it was about distributing pamphlets opposing the draft), and the actual holding of the case the quote comes from was overturned more than half a century ago.
Hasn't stopped every authoritarian from parroting the quote whenever they want to censor something.
Despite its history, it’s still a valid example of an exception to the First Amendment under current law. The problem is that most people who cite it are using it as an analogy for something else that isn’t.
> Despite its history, it’s still a valid example of an exception to the First Amendment under current law.
Is it though? If you're putting on a play, and there is a fire in the script, e.g. in a play criticizing that decision, can the government punish you for putting on the play because of the risk it could cause a panic? If there is actually a fire in the theater, can they punish you for telling people? What if there isn't actually a fire but you believe that there is?
Not only is it useless as an analogy for doing any reasoning, the thing itself is so overbroad that even the unqualified literal interpretation is more of a prohibition than would actually be permissible.
None of your examples is what is meant by "Shouting fire in a crowded theatre." The quote is expressly about falsely shouting fire, not as part of the play, not as an honest act of attempting to alert people to a dangerous situation. The quote with more context is clear: "The most stringent protection of free speech would not protect a man falsely shouting fire in a theatre and causing a panic..."
> If there is actually a fire in the theater, can they punish you for telling people? What if there isn't actually a fire but you believe that there is?
(IANAL) The law almost always takes circumstances into consideration and, AIUI, comes to reasonable conclusions in this case. The Wikipedia article on this quote[1] goes into that:
> Ultimately, whether it is legal in the United States to falsely shout "fire" in a theater depends on the circumstances in which it is done and the consequences of doing it. The act of shouting "fire" when there are no reasonable grounds for believing one exists is not in itself a crime, and nor would it be rendered a crime merely by having been carried out inside a theatre, crowded or otherwise. If it causes a stampede and someone is killed as a result, then the act could amount to a crime, such as involuntary manslaughter, assuming the other elements of that crime are made out. Similarly, state laws such as Colorado Revised Statute § 18-8-111 classify knowingly "false reporting of an emergency," including false alarms of fire, as a misdemeanor if the occupants of the building are caused to be evacuated or displaced, and a felony if the emergency response results in the serious bodily injury or death of another person.
(It continues with other jurisdictions and situations.)
[1]: https://en.wikipedia.org/wiki/Shouting_fire_in_a_crowded_the...
Including, from a modern free speech advocacy perspective, the original use of the analogy, which was about forbidding people from advocating resistance against a military draft!
https://en.wikipedia.org/wiki/Schenck_v._United_States