New York Aims at Big Tech with Major AI Safety Law

RAISE Act imposes hefty fines and strict safety rules for AI models costing over $100M to develop, creating a new regulatory hurdle for industry leaders.

New York has established a significant new regulatory front for the artificial intelligence industry, with Governor Kathy Hochul signing into law a bill that directly targets the developers of the most powerful and expensive AI systems. The legislation, known as the Responsible AI Safety and Education (RAISE) Act, introduces stringent safety requirements and the threat of multi-million dollar penalties for non-compliance, creating a new operational reality for tech giants like Google, Microsoft, and Meta.

The law specifically targets what it terms “frontier models”: AI systems that cost more than $100 million to train or that rely on vast amounts of computing power. This high threshold squarely focuses the regulation on the handful of large, well-capitalized companies at the forefront of the generative AI race, including Google, Microsoft and its partner OpenAI, Anthropic, and Meta.

Under the new rules, developers of these frontier models must establish and maintain robust safety and security protocols, conduct thorough risk assessments, and report any “major safety incident” to the state’s Attorney General within 72 hours. The legislation carries substantial financial teeth, with civil penalties of up to $10 million for an initial violation.

The move marks one of the most assertive efforts within the United States to regulate the development of advanced AI, creating a state-level framework that has drawn comparisons to the European Union’s comprehensive AI Act. However, a key distinction lies in its assignment of liability. The New York law places the primary legal burden on the upstream developers of the foundational models, whereas the EU’s approach distributes obligations across various actors in the AI value chain.

The legislation passed despite significant opposition from the tech industry. Industry groups such as the Computer & Communications Industry Association (CCIA) and Tech:NYC warned that the act could stifle innovation and impose burdensome compliance costs that might drive AI projects out of the state. In a statement earlier this year, the CCIA urged the governor to reject the bill, arguing that it would create barriers to entry and disadvantage New York-based innovators.

While the final version of the bill was reportedly softened from earlier drafts, omitting requirements for perpetual third-party auditing, for instance, it still represents a landmark for AI governance in the U.S. It codifies a level of responsibility and public accountability for private-sector AI labs that, until now, had been largely self-imposed.

For investors and the companies themselves, the RAISE Act introduces a new layer of risk and expense. The cost of compliance, coupled with the potential for headline-making fines, will become a fixed part of the budget for frontier AI development. Companies have until March 2026 to prepare for full compliance, but the work of interpreting the law's requirements and building the necessary internal frameworks must begin immediately. This development in New York could also serve as a blueprint for other states, potentially leading to a patchwork of regulations across the country that complicates the path to market for new AI technologies.