
December 10, 2025

AI laws in 2026: the current EU–US landscape and how it shapes software development

As 2026 begins, AI development is shaped as much by regulation as by technology. The EU is entering the first real enforcement phases of the AI Act, while the US is shifting toward a mix of federal deregulation and increasingly assertive state rules.


For software teams, this regulatory split affects design choices, hosting decisions, procurement of foundation models, and how AI features must be monitored throughout their lifecycle. This article outlines what has actually changed, what will apply over the next 24 months, and how engineering teams can plan projects that stay compliant across markets.



Europe in 2026: the AI Act moves forward, but with a softer runway


The EU AI Act introduced a four-tier risk structure (unacceptable, high, limited, minimal) and began its first obligations in 2025. The immediate milestones now relevant in early 2026 are:

  • Unacceptable-risk AI systems are already banned.
  • Transparency duties for limited-risk AI (chatbots, deepfakes) apply.
  • General-purpose AI (GPAI) rules begin applying throughout 2025–2026, including documentation, training-data summaries, and model-provider safety processes.
  • High-risk AI obligations were originally planned to start phasing in from 2 August 2026, but the Commission has proposed roughly a one-year delay for many high-risk systems, pushing most enforcement into 2027 (subject to approval by the European Parliament and Member States).


This does not reduce compliance expectations; it simply gives providers and deployers a clearer runway.


GPAI governance becomes central

With foundation models now integrated into almost every AI product, two mechanisms matter:

  • The European AI Office supervises GPAI providers and can request safety documentation and risk data.
  • The GPAI Code of Practice developed in 2025 sets voluntary—but increasingly expected—standards for transparency, training-data disclosure, robustness testing, and copyright handling.

For companies consuming models rather than training their own, this means vendor selection is becoming a compliance decision. Expect richer model cards, documentation, and contractual commitments from any provider serving EU clients.
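To make that documentation reviewable rather than a PDF attachment, some teams capture it as structured metadata they can validate in procurement pipelines. The sketch below is one hypothetical shape for such a record in Python; the field names are illustrative, not a mandated EU or GPAI Code of Practice format.

```python
# Sketch of the kind of model-card metadata worth requiring from a provider.
# Field names are illustrative, not a mandated EU or GPAI schema.
from dataclasses import dataclass


@dataclass
class ModelCard:
    model_name: str
    provider: str
    version: str
    intended_uses: list[str]
    training_data_summary: str        # high-level categories, in line with GPAI expectations
    known_limitations: list[str]
    safety_evaluations: list[str]     # e.g. robustness or red-teaming test names
    copyright_policy_url: str
    incident_contact: str


def missing_fields(card: ModelCard) -> list[str]:
    """Flag empty fields before a model is approved for EU-facing products."""
    return [name for name, value in vars(card).items() if not value]
```

A record like this also gives legal and procurement teams a single artefact to review per model version, instead of chasing scattered documentation.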


Liability rules reshape software architecture

The dedicated AI Liability Directive was formally withdrawn in early 2025, but the revised EU Product Liability Directive now explicitly treats software, including AI systems, as a “product”, with the new rules applying to products placed on the EU market from 9 December 2026.


This means:

  • Defects in AI features can trigger strict liability.
  • Importers, hosting platforms, and fulfilment services can also be liable.
  • Documentation, versioning, testing records, and explainability become legally relevant evidence.

For enterprise software, this pulls AI risk management into the same category as product-safety engineering.



United States in 2026: lighter federal control, heavier state enforcement

With the change in administration, federal AI policy has shifted:

  • Biden’s 2023 AI executive order on “safe, secure, and trustworthy AI” (Executive Order 14110) was revoked in 2025.
  • The new federal approach stresses removing perceived barriers to AI innovation.
  • A national AI governance framework is being drafted via executive order to establish a single federal standard and reduce the compliance friction created by differing state-level AI rules, although the details and the extent of any preemption are still in flux.


This framework is not yet a single AI law, and it will not eliminate state regulations, but it signals a more growth-oriented federal posture.


In practice, individual states are now the real compliance drivers. Two matter most for software teams:


Colorado

  • The Colorado AI Act focuses on preventing algorithmic discrimination in “high-risk” uses.
  • Duties apply to both developers and deployers.
  • Enforcement for high-risk systems, originally set for 1 February 2026, has been pushed back to 30 June 2026, with further amendments expected to refine key definitions and obligations.


California

  • Automated decision-making systems used in employment and consumer contexts face disclosure, record-keeping, and anti-bias duties.
  • Additional rules require large AI developers to document catastrophic-risk safeguards.
  • Consumer-protection enforcement against misleading or unsafe AI systems is rising quickly.


Beyond these states, attorneys general across the US are launching investigations into harmful model behaviour, misleading claims, and unsafe generative features.


The result: even without a federal AI Act, enforcement risk is real, especially in hiring, credit, healthcare, financial services, and child-facing applications.



What this means for engineering teams in 2026


1. Compliance becomes part of system design

Software architecture is now shaped by questions such as:

  • Will this feature fall under the EU’s high-risk list?
  • Does it process data that triggers sectoral rules in the US?
  • Do we need human oversight steps?
  • Should logs be immutable for auditability?
  • Which model providers meet EU documentation expectations?

Designing compliance in from the start prevents expensive retrofits later; the sketch below shows one way to make the audit-log question concrete.
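As one illustration of the auditability question, here is a minimal sketch of an append-only, hash-chained decision log. All class and field names are hypothetical; a production system would also handle storage, retention, redaction, and access control.

```python
# Minimal sketch of an append-only, hash-chained audit log for AI decisions.
# Illustrative only: field names and structure are hypothetical, not a prescribed schema.
import hashlib
import json
import time
from dataclasses import asdict, dataclass, field


@dataclass
class AuditEntry:
    model_version: str          # which model/prompt version produced the output
    input_summary: str          # redacted or hashed input, never raw personal data
    output_summary: str
    human_reviewer: str | None  # set when a human-oversight step applies
    timestamp: float = field(default_factory=time.time)
    prev_hash: str = ""         # links this entry to the previous one
    entry_hash: str = ""


class AuditLog:
    """Each entry embeds the hash of the previous entry, so any later
    modification breaks the chain and is detectable during an audit."""

    def __init__(self) -> None:
        self.entries: list[AuditEntry] = []

    def append(self, entry: AuditEntry) -> AuditEntry:
        entry.prev_hash = self.entries[-1].entry_hash if self.entries else "GENESIS"
        payload = {k: v for k, v in asdict(entry).items() if k != "entry_hash"}
        entry.entry_hash = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every hash; any tampered or reordered entry fails the check."""
        prev = "GENESIS"
        for e in self.entries:
            payload = {k: v for k, v in asdict(e).items() if k != "entry_hash"}
            expected = hashlib.sha256(
                json.dumps(payload, sort_keys=True).encode()
            ).hexdigest()
            if e.prev_hash != prev or e.entry_hash != expected:
                return False
            prev = e.entry_hash
        return True
```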


2. Provider vs deployer roles must be defined for every project

Regulation on both continents uses a split between:

  • Providers — those who build and market an AI system.
  • Deployers — those who use an AI system inside their own workflows.


Contracts should reflect:

  • Who owns model governance obligations.
  • Who monitors outputs and maintains incident logs.
  • Who provides risk assessments and data documentation.
  • Who handles user transparency duties.

Without this split, cross-border deployments become unnecessarily risky.
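One lightweight way to keep that split visible to engineers as well as lawyers is to encode it as project configuration and check it for gaps. The sketch below is a hypothetical responsibility matrix in Python; the obligation labels are simplified placeholders, not legal terms of art.

```python
# Illustrative sketch of a provider/deployer responsibility matrix.
# Obligation names are simplified labels, not legal terms of art.
from enum import Enum


class Party(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    SHARED = "shared"   # "shared" still needs a named lead in the contract


# Obligations a project must assign before launch; extend per jurisdiction.
REQUIRED_OBLIGATIONS = {
    "model_governance",
    "output_monitoring_and_incident_logs",
    "risk_assessments_and_data_documentation",
    "user_transparency_notices",
}

# Example assignment agreed in the contract for one deployment.
RESPONSIBILITIES: dict[str, Party] = {
    "model_governance": Party.PROVIDER,
    "output_monitoring_and_incident_logs": Party.DEPLOYER,
    "user_transparency_notices": Party.DEPLOYER,
}


def unassigned(matrix: dict[str, Party]) -> set[str]:
    """Return obligations with no agreed owner -- gaps to close before go-live."""
    return REQUIRED_OBLIGATIONS - set(matrix)


print(unassigned(RESPONSIBILITIES))  # {'risk_assessments_and_data_documentation'}
```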


3. Documentation standards rise sharply

Because liability and regulatory duties rely on evidence, teams should maintain:

  • Data-source and preprocessing documentation
  • Evaluation and robustness test suites
  • Change logs for models and prompts
  • Output-level monitoring for bias or drift
  • Human-oversight procedures where required

This is as much operational discipline as regulatory necessity.
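For the monitoring item in particular, even a simple check recorded against a documented baseline is useful evidence. The sketch below assumes a hypothetical boolean outcome signal and an illustrative threshold; real monitoring would add per-subgroup analysis and proper statistical tests.

```python
# Sketch of a lightweight output-level drift check: compare the rate of a
# monitored outcome (e.g. "application flagged for review") in a recent window
# against the rate recorded at evaluation time. Threshold is illustrative.
from dataclasses import dataclass


@dataclass
class DriftReport:
    baseline_rate: float
    current_rate: float
    exceeded: bool


def check_outcome_drift(
    baseline_outcomes: list[bool],
    recent_outcomes: list[bool],
    max_abs_shift: float = 0.05,
) -> DriftReport:
    """Flag the model for review when the outcome rate moves by more than
    max_abs_shift versus the documented baseline."""
    if not baseline_outcomes or not recent_outcomes:
        raise ValueError("both windows must contain at least one observation")
    baseline_rate = sum(baseline_outcomes) / len(baseline_outcomes)
    current_rate = sum(recent_outcomes) / len(recent_outcomes)
    exceeded = abs(current_rate - baseline_rate) > max_abs_shift
    return DriftReport(baseline_rate, current_rate, exceeded)
```

Storing each DriftReport alongside the model and prompt change logs is what turns ad-hoc monitoring into the kind of evidence liability rules now reward.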


4. Foundation-model due diligence becomes routine

When choosing an LLM or multimodal model:

  • Prefer providers aligned with the GPAI Code of Practice.
  • Require documentation on training data categories and safety testing.
  • Verify that copyright handling policies match EU expectations.
  • Ensure US state-level risk disclosures can be met.
  • Negotiate SLAs for model updates related to safety or compliance.

By 2026–2027, procurement teams will increasingly treat model providers like regulated infrastructure vendors.
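A checklist like this can be tracked as structured data so that unanswered items block procurement rather than disappear into email threads. The record below is hypothetical and mirrors the bullets above; it is not an official assessment schema.

```python
# Hypothetical vendor due-diligence record; fields mirror the checklist above.
from dataclasses import dataclass


@dataclass
class ModelVendorAssessment:
    vendor: str
    gpai_code_of_practice_alignment: bool
    training_data_categories_documented: bool
    safety_testing_report_available: bool
    copyright_policy_reviewed: bool
    us_state_disclosures_supported: bool
    safety_update_sla_agreed: bool

    def open_items(self) -> list[str]:
        """List unmet criteria so procurement can track them to closure."""
        return [name for name, value in vars(self).items()
                if isinstance(value, bool) and not value]


assessment = ModelVendorAssessment(
    vendor="example-model-provider",           # hypothetical vendor name
    gpai_code_of_practice_alignment=True,
    training_data_categories_documented=True,
    safety_testing_report_available=False,
    copyright_policy_reviewed=True,
    us_state_disclosures_supported=True,
    safety_update_sla_agreed=False,
)
print(assessment.open_items())
# ['safety_testing_report_available', 'safety_update_sla_agreed']
```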


5. Global solutions need a “highest common denominator” approach

The simplest strategy for companies working across the EU and US is:

  • Use the EU AI Act + EU product liability rules as the design baseline.
  • Add state-level constraints for sensitive US sectors (employment, credit, housing, healthcare).
  • Use the NIST AI Risk Management Framework to structure internal governance.

This avoids fragmented engineering and satisfies regulators on both sides.
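In code terms, the approach amounts to computing, per use case, the union of controls required across jurisdictions and building to that set once. The sketch below uses invented control and sector names purely to show the shape of that selection logic.

```python
# Sketch of a "highest common denominator" control selector: start from an EU
# AI Act-style baseline for the use case's risk tier, then add controls
# triggered by sensitive US sectors. Control names are illustrative placeholders.
EU_BASELINE: dict[str, set[str]] = {
    "minimal": set(),
    "limited": {"user_transparency_notice"},
    "high": {"risk_management_file", "human_oversight", "logging", "conformity_docs"},
}

US_SECTOR_CONTROLS: dict[str, set[str]] = {
    "employment": {"adverse_impact_testing", "candidate_disclosure"},
    "credit": {"adverse_action_notices", "model_explainability_report"},
    "healthcare": {"phi_handling_review"},
    "housing": {"adverse_impact_testing"},
}


def required_controls(eu_risk_tier: str, us_sectors: set[str]) -> set[str]:
    """Union of EU-baseline controls and any US state/sector add-ons,
    so one build satisfies the strictest applicable regime."""
    controls = set(EU_BASELINE.get(eu_risk_tier, set()))
    for sector in us_sectors:
        controls |= US_SECTOR_CONTROLS.get(sector, set())
    return controls


print(sorted(required_controls("high", {"employment"})))
```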



What companies should prioritise in 2026–2027


To keep AI projects safe, scalable, and legally resilient, software teams should focus on:

  1. Mapping all AI use cases and classifying them under EU and US rules.
  2. Implementing governance mechanisms aligned with NIST and the AI Act’s risk-management logic.
  3. Building transparent data and testing pipelines rather than ad-hoc model evaluation.
  4. Defining deployment responsibilities clearly in contracts.
  5. Selecting foundation models with mature documentation and safety processes.
  6. Preparing for audits by maintaining complete lifecycle records.
  7. Designing user-facing transparency where required (chatbots, automated decisions, synthetic media).


Done well, these steps increase reliability, accelerate integration cycles, and make AI components easier to scale globally.



How Blocshop helps teams build compliant, production-ready AI


AI regulation is evolving, but the direction is consistent: companies need to build AI features with traceability, reliable documentation, structured risk management, and clear deployment controls. This is where engineering discipline matters as much as model choice.


Blocshop supports organisations across the full development lifecycle:

  • Architecture and solution design aligned with EU and US regulatory expectations
  • Implementation of AI governance processes using practical frameworks familiar to engineering teams
  • Integration of foundation models with the testing, logging, and transparency artefacts regulators now expect
  • Modernisation of existing systems so AI features can be added safely and remain auditable from day one
  • Cross-border compliance alignment for companies operating in multiple jurisdictions


If you are planning new AI-powered features or need to bring existing systems into line with upcoming obligations, Blocshop can help you set up a development approach that meets regulatory expectations without slowing engineering velocity.


Reach out to Blocshop to discuss your roadmap and explore how we can support your AI initiatives.


SCHEDULE A CALL
