December 10, 2025
AI laws in 2026: the current EU–US landscape and how it shapes software development
As 2026 begins, AI development is shaped as much by regulation as by technology. The EU is entering the first real enforcement phases of the AI Act, while the US is shifting toward a mix of federal deregulation and increasingly assertive state rules.
For software teams, this regulatory split affects design choices, hosting decisions, procurement of foundation models, and how AI features must be monitored throughout their lifecycle. This article outlines what has actually changed, what will apply over the next 24 months, and how engineering teams can plan projects that stay compliant across markets.
The EU AI Act introduced a four-tier risk structure (unacceptable, high, limited, minimal) and began its first obligations in 2025. The immediate milestones now relevant in early 2026 are:
- Prohibitions on unacceptable-risk practices and AI-literacy duties have applied since 2 February 2025.
- Obligations for providers of general-purpose AI (GPAI) models have applied since 2 August 2025.
- The bulk of the Act, including the high-risk regime, is scheduled to apply from 2 August 2026, with the Commission's digital omnibus proposal set to push back parts of that timeline.
This does not reduce compliance expectations; it simply gives providers and deployers a clearer runway.
With foundation models now integrated into almost every AI product, two mechanisms matter:
- The AI Act's GPAI obligations, which require model providers to publish technical documentation, respect copyright, and summarise training data, with additional safety and security duties for models posing systemic risk.
- The voluntary GPAI Code of Practice, which providers can sign to demonstrate compliance ahead of harmonised standards.
For companies consuming models rather than training their own, this means vendor selection is becoming a compliance decision. Expect richer model cards, documentation, and contractual commitments from any provider serving EU clients.
The dedicated AI Liability Directive was formally withdrawn in early 2025, but the revised EU Product Liability Directive now explicitly treats software, including AI systems, as a “product”, with the new rules applying to products placed on the EU market from 9 December 2026.
This means:
- Strict, no-fault liability can attach to defective software and AI systems, not only to physical goods.
- Defects that emerge after sale, for example through updates or continued learning, remain in scope.
- Courts can order disclosure of technical evidence, and the burden of proof is eased for claimants in complex AI cases.
For enterprise software, this pulls AI risk management into the same category as product-safety engineering.
With the change in administration, federal AI policy has shifted:
- The previous administration's AI executive order was rescinded in January 2025.
- A deregulation-focused AI Action Plan, published in July 2025, prioritises innovation, infrastructure build-out, and exports.
- Federal agencies are being steered away from prescriptive AI rules, with ongoing pressure to preempt stricter state laws.
This framework is not yet a single AI law, and it will not eliminate state regulations, but it signals a more growth-oriented federal posture.
With federal policy in flux, individual states are now the real compliance drivers. Two matter most for software teams:
Colorado
The Colorado AI Act (SB 24-205), now due to take effect on 30 June 2026 after a legislative delay, imposes duties on developers and deployers of high-risk AI systems used for consequential decisions in areas such as hiring, lending, housing, and healthcare.
California
California has enacted a cluster of AI laws, including SB 53 (safety and transparency reporting for frontier-model developers) and AB 2013 (training-data disclosure for generative AI, applying from 1 January 2026), alongside privacy rules on automated decision-making under the CCPA.
Beyond these states, attorneys general across the US are launching investigations into harmful model behaviour, misleading claims, and unsafe generative features.
The result: even without a federal AI Act, enforcement risk is real, especially in hiring, credit, healthcare, financial services, and child-facing applications.
Software architecture is now shaped by questions such as:
- Where is the model hosted, and which jurisdiction's rules apply to its outputs?
- Can every AI-assisted decision be logged, versioned, and explained after the fact?
- Where does a human review or override the system, and is that checkpoint enforced in code?
- Can behaviour be configured per market without forking the codebase?
Designing compliance in early prevents expensive retrofits later; the sketch after this list shows one way to make those checkpoints explicit.
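As an illustration only, here is a minimal Python sketch of such a checkpoint. The `call_model` stub, the `MODEL_VERSION` constant, and the risk tiers are hypothetical placeholders rather than any particular framework's API; the point is that versioning, logging, and the human-oversight flag live in one wrapper instead of being scattered across features.

```python
import json
import logging
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-audit")

MODEL_VERSION = "example-model-2026-01"  # hypothetical pinned model version


@dataclass
class AIDecision:
    timestamp: str
    model_version: str
    risk_tier: str            # e.g. "high" for hiring or credit features
    input_summary: str        # redacted summary, not raw personal data
    output: str
    human_review_required: bool


def call_model(prompt: str) -> str:
    """Stand-in for a real model call (hosted API or local inference)."""
    return f"stub response to: {prompt[:40]}"


def ai_checkpoint(prompt: str, risk_tier: str) -> AIDecision:
    """Single choke point: every AI call is versioned, logged, and
    flagged for human review when the risk tier demands it."""
    decision = AIDecision(
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_version=MODEL_VERSION,
        risk_tier=risk_tier,
        input_summary=prompt[:80],
        output=call_model(prompt),
        human_review_required=(risk_tier == "high"),
    )
    log.info(json.dumps(asdict(decision)))  # ship to durable storage in production
    return decision


if __name__ == "__main__":
    ai_checkpoint("Rank these job applicants by fit", risk_tier="high")
```

Routing every AI feature through one wrapper like this makes regulators' questions far easier to answer later, because the audit trail and the oversight rules change in exactly one place.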
Regulation on both continents uses a split between:
- providers (EU) or developers (US), who build or substantially modify an AI system, and
- deployers, who use the system in a concrete business context.
Contracts should reflect:
- which party supplies technical documentation, model cards, and conformity evidence;
- permitted uses and contexts for the model;
- notification duties for incidents, material changes, and model updates;
- how liability is allocated if the system causes harm.
Without this split, cross-border deployments become unnecessarily risky.
Because liability and regulatory duties rely on evidence, teams should maintain:
- versioned records of which model, prompts, and guardrails were live at any given time;
- logs of significant AI-assisted decisions and any human overrides;
- risk assessments, evaluation results, and incident reports;
- documentation of training or fine-tuning data sources where applicable.
This is as much operational discipline as regulatory necessity; a compact record format is sketched below.
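One lightweight approach, assuming plain JSONL files purely for illustration (no regulation mandates a specific format), is to chain each audit record to its predecessor with a hash, so that tampering with the history is detectable:

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical path; use durable, access-controlled storage in production.
AUDIT_FILE = Path("ai_audit.jsonl")


def _last_hash() -> str:
    """Hash of the most recent record, or a fixed seed for an empty log."""
    if not AUDIT_FILE.exists():
        return "0" * 64
    lines = AUDIT_FILE.read_text().strip().splitlines()
    return hashlib.sha256(lines[-1].encode()).hexdigest() if lines else "0" * 64


def append_record(event: dict) -> None:
    """Append an audit record chained to its predecessor by hash.
    Editing any earlier line breaks every hash that follows it."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "prev": _last_hash(),
        **event,
    }
    with AUDIT_FILE.open("a") as f:
        f.write(json.dumps(record, sort_keys=True) + "\n")


if __name__ == "__main__":
    append_record({
        "model": "example-model-2026-01",  # hypothetical model identifier
        "decision": "loan_pre_screen",
        "human_override": False,
    })
```

The hash chain is no substitute for proper infrastructure, but it turns a plain log file into evidence that is noticeably harder to quietly rewrite.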
When choosing an LLM or multimodal model, check:
- whether the provider publishes EU-grade technical documentation and model cards;
- whether it has signed the GPAI Code of Practice or committed to equivalent transparency;
- what contractual commitments cover copyright, training data, and incident notification;
- whether available hosting regions and data flows match your deployment map.
By 2026–2027, procurement teams will increasingly treat model providers like regulated infrastructure vendors.
The simplest strategy for companies working across the EU and US is to build a single baseline that satisfies the strictest applicable regime, in practice usually the EU AI Act, and then express market-specific behaviour as configuration rather than as separate codebases.
This avoids fragmented engineering and satisfies regulators on both sides; a configuration sketch follows.
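A minimal sketch of that idea, with invented region keys and policy fields, assuming the strictest profile as a default that regions may only deliberately relax:

```python
from dataclasses import dataclass, replace


@dataclass(frozen=True)
class AIPolicy:
    require_human_review: bool
    log_decisions: bool
    disclose_ai_to_users: bool
    allow_emotion_inference: bool


# The strictest profile (modelled loosely on EU AI Act expectations) is the default.
BASELINE = AIPolicy(
    require_human_review=True,
    log_decisions=True,
    disclose_ai_to_users=True,
    allow_emotion_inference=False,
)

# Regions override only what their local rules genuinely permit.
POLICIES = {
    "eu": BASELINE,
    "us-co": BASELINE,  # Colorado's high-risk duties keep the strict profile
    "us-other": replace(BASELINE, require_human_review=False),
}


def policy_for(region: str) -> AIPolicy:
    """Unknown regions fall back to the strictest profile by design."""
    return POLICIES.get(region, BASELINE)


if __name__ == "__main__":
    print(policy_for("us-other"))
    print(policy_for("somewhere-new"))  # defaults to BASELINE
```

Defaulting unknown markets to the strict profile means a new deployment can never be accidentally under-regulated; loosening a control becomes an explicit, reviewable decision.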
To keep AI projects safe, scalable, and legally resilient, software teams should focus on:
- traceability of models, prompts, and decisions;
- reliable documentation versioned alongside the code;
- structured risk management for high-impact features;
- clear deployment controls that can be configured per market.
Done well, these steps increase reliability, accelerate integration cycles, and make AI components easier to scale globally.
AI regulation is evolving, but the direction is consistent: companies need to build AI features with traceability, reliable documentation, structured risk management, and clear deployment controls. This is where engineering discipline matters as much as model choice.
Blocshop supports organisations across the full development lifecycle.
If you are planning new AI-powered features or need to bring existing systems into line with upcoming obligations, Blocshop can help you set up a development approach that meets regulatory expectations without slowing engineering velocity.
Reach out to Blocshop to discuss your roadmap and explore how we can support your AI initiatives.