
November 3, 2025 • 7 min read

CE marking software under the EU AI Act – Who needs it and how to prepare a conformity assessment

From 2026, AI systems classified as high-risk under the EU Artificial Intelligence Act (Regulation (EU) 2024/1689) will have to undergo a conformity assessment and obtain a CE marking before being placed on the EU market or put into service. For many software vendors—especially those operating in finance, healthcare, HR, or public services—this introduces product-safety obligations that until now applied mainly to physical goods.

Does software really need CE marking under the AI Act?

The short answer is “yes” — if the software falls within the Act’s high-risk scope.

Condition | CE marking required?
Stand-alone AI with minimal risk (chatbots, content tools) | ❌ No
AI embedded in CE-regulated products (medical devices, machinery, automotive) | ✅ Yes — existing product law + AI Act obligations
Stand-alone AI listed in Annex III (high-risk use cases) | ✅ Yes
Foundation models / GPAI | ⚠️ Separate transparency & model-governance rules (Chapter V, Articles 53–55) – no CE mark
Internal-only software, not marketed in the EU | ❌ No, but be careful: “putting into service” includes own use. If you develop a high-risk system for your own use, you are still a provider putting it into service, and the high-risk obligations (including conformity assessment and CE marking) can still apply.

High-risk AI examples (Annex III): credit scoring, CV screening, fraud detection, medical diagnosis, critical infrastructure control, education grading, law enforcement risk assessment.
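To make this first-pass triage concrete, here is a minimal Python sketch of how a team might encode the table above in an internal compliance tool. The `triage` helper and its flags are our own illustrative names, not terms from the Act, and real scoping decisions need legal review against Annexes I and III:

```python
from enum import Enum

class CEOutcome(Enum):
    NOT_REQUIRED = "no CE marking under the AI Act"
    REQUIRED = "CE marking and high-risk obligations apply"
    GPAI_RULES = "no CE mark; GPAI transparency and governance rules instead"

def triage(is_gpai: bool,
           embedded_in_ce_regulated_product: bool,
           annex_iii_use_case: bool,
           placed_on_eu_market_or_put_into_service: bool) -> CEOutcome:
    """First-pass triage mirroring the table above. Illustrative only:
    real scoping needs legal review against Annexes I and III."""
    if not placed_on_eu_market_or_put_into_service:
        # Caution: "putting into service" includes deploying for own use.
        return CEOutcome.NOT_REQUIRED
    if is_gpai:
        return CEOutcome.GPAI_RULES
    if embedded_in_ce_regulated_product or annex_iii_use_case:
        return CEOutcome.REQUIRED
    return CEOutcome.NOT_REQUIRED

# Example: a stand-alone CV-screening tool sold in the EU (Annex III, employment)
print(triage(False, False, True, True).value)
```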

Who is responsible for CE marking?

Responsibilities are defined in Articles 16–25 of the AI Act for providers, importers, and distributors, and in Article 26 for deployers:

Actor | Obligation
Provider (developer/vendor) | Primary responsibility: carry out the conformity assessment and affix the CE mark (Arts. 16, 43 & 48)
Importer | Must verify the provider’s conformity documentation and CE mark before placing the system on the market (Art. 23)
Distributor | May only supply conforming AI systems (Art. 24)
Deployer (user) | Must operate the system according to its instructions for use and report serious incidents (Arts. 26 & 73)
Non-EU provider selling into the EU | Must appoint an EU authorised representative (Art. 22) and still obtain the CE mark


How the conformity assessment works

This is outlined in Article 43, together with Annexes VI and VII:

Route | When used | Who performs it
Internal conformity assessment (Annex VI) | For most stand-alone high-risk AI systems | Provider (self-assessment)
Third-party assessment via a notified body (Annex VII) | When the AI is part of a product regulated under sectoral CE legislation, or when harmonised standards are not applied | Independent notified body designated by a Member State


Key elements required before CE marking

General CE-mark placement rules come from Regulation (EC) No 765/2008, Art. 30 and Decision 768/2008/EC, Annex II.

  • AI risk management system (Art. 9)
  • Technical documentation file (Annex IV) – system description, data sets, metrics, logging, human-oversight plan (see the sketch after this list)
  • Quality management system (Art. 17, assessed under Annex VII)
  • EU Declaration of Conformity (Art. 47)
  • Affix the CE mark (Art. 48) once compliance is demonstrated
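As an illustration of how a team might keep the Annex IV file machine-readable from day one, here is a minimal sketch. The field names are our own shorthand for Annex IV headings, not an official schema, and the example system is hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class TechnicalDocumentation:
    """Illustrative skeleton loosely following Annex IV headings.
    Field names are our own shorthand, not an official schema."""
    system_name: str
    version: str
    intended_purpose: str                   # Annex IV(1): general description
    training_data_sources: list[str]        # Annex IV(2): data and development
    performance_metrics: dict[str, float]   # declared accuracy figures
    logging_capabilities: str               # how Art. 12 record-keeping is met
    human_oversight_measures: str           # how Art. 14 oversight is implemented
    risk_management_summary: str            # pointer to the Art. 9 risk file
    harmonised_standards: list[str] = field(default_factory=list)

doc = TechnicalDocumentation(
    system_name="CreditScorer",              # hypothetical example system
    version="2.1.0",
    intended_purpose="Creditworthiness assessment of loan applicants",
    training_data_sources=["internal loan book, 2015-2024"],
    performance_metrics={"auc": 0.87},
    logging_capabilities="Append-only event log written per decision",
    human_oversight_measures="Analyst review of every declined application",
    risk_management_summary="See risk-register.md, reviewed quarterly",
)
```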

After CE marking: ongoing duties

Post-market monitoring (Art. 72), incident reporting (Art. 73), and market surveillance under Regulation (EU) 2019/1020 all apply. Providers must also keep the technical documentation for 10 years and cooperate with the competent authorities (Arts. 18 & 21).
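The Act does not prescribe a technical format for post-market monitoring, but as a minimal illustration (entirely our own construction), a provider might continuously compare live performance against the figures declared in the technical documentation and open an internal review when they drift:

```python
def check_drift(live_metrics: dict[str, float],
                declared_metrics: dict[str, float],
                tolerance: float = 0.05) -> list[str]:
    """Flag metrics that have drifted beyond tolerance from the values
    declared in the Annex IV documentation (illustrative only)."""
    return [
        name for name, declared in declared_metrics.items()
        if abs(live_metrics.get(name, 0.0) - declared) > tolerance
    ]

# Example: live AUC has dropped well below the documented 0.87
flagged = check_drift({"auc": 0.79}, {"auc": 0.87})
if flagged:
    print(f"Open post-market review for: {flagged}")
```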

 

When to start

With the AI Act’s high-risk obligations phasing in across 2026–2027, development and compliance work should run in parallel now. A delayed CE marking can mean a suspended product launch, plus penalties of up to €15 million or 3 % of global annual turnover for most high-risk non-compliance, and up to €35 million or 7 % for prohibited practices (Art. 99).

 

EU vs UK approach

Aspect | EU AI Act (Reg. 2024/1689) | UK policy (“Pro-Innovation Framework for AI”, DSIT 2024)
Legal status | Binding regulation | Non-binding cross-regulator guidance
Conformity marking | Mandatory CE mark for high-risk AI | No equivalent UKCA mark for AI
Oversight | European AI Office + national authorities | FCA, ICO, MHRA, Ofcom, CMA, etc.
Assurance mechanism | Conformity assessment (Art. 43) | Voluntary algorithmic assurance pilots (Alan Turing Institute / AI Standards Hub)
International effect | Applies to all AI placed on the EU market | UK companies selling into the EU must comply with the EU Act

Developer checklist for AI Act CE marking

Before an AI system can legally carry the CE mark, developers and providers must work through a series of concrete steps defined in the AI Act — from risk classification to technical documentation and final declaration of conformity.

  1. Identify the risk category (Annex III)
  2. Define provider/importer/deployer roles (Art. 3)
  3. Build a risk management workflow (Art. 9)
  4. Prepare the technical documentation (Annex IV)
  5. Implement logging & versioning (Art. 12; see the sketch after this list)
  6. Ensure human oversight (Art. 14)
  7. Verify accuracy, robustness & cybersecurity (Art. 15)
  8. Draft the EU Declaration of Conformity (Art. 47)
  9. Affix the CE mark (Art. 48) → ready for market
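For step 5, here is a minimal sketch of what automatic record-keeping around inference could look like. The wrapper, event fields, and file-based log are all our own assumptions: Art. 12 sets the goal (automatic, traceable logging over the system’s lifetime) rather than a format:

```python
import json
import time
import uuid
from typing import Any, Callable

def with_audit_log(model_version: str,
                   predict: Callable[[dict[str, Any]], Any],
                   log_path: str = "audit.log"):
    """Wrap an inference function so every call appends a timestamped,
    traceable record (illustrative of Art. 12 record-keeping goals)."""
    def wrapped(features: dict[str, Any]) -> Any:
        result = predict(features)
        event = {
            "event_id": str(uuid.uuid4()),
            "timestamp": time.time(),
            "model_version": model_version,
            "input": features,
            "output": result,
        }
        with open(log_path, "a") as f:  # append-only by convention
            f.write(json.dumps(event) + "\n")
        return result
    return wrapped

# Usage (my_model is hypothetical):
# scored = with_audit_log("2.1.0", my_model.predict)(applicant_features)
```

In production this would target an append-only store rather than a local file, but the shape of the record is the point: every decision is traceable to a model version and a timestamp.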

✅ Get help from Blocshop

Blocshop helps organisations plan, build and safely integrate AI into their software systems by:

  • Assessing where AI can be applied in existing products or workflows
  • Designing architectures that meet both technical and regulatory requirements
  • Preparing the documentation, risk analysis and governance models needed for responsible AI deployment
  • Supporting teams through implementation, testing and integration with enterprise systems

Book a free consultation with Blocshop to assess your AI system’s compliance readiness.

Learn more from our insights


November 3, 2025 • 7 min read

CE marking software under the EU AI Act – who needs it and how to prepare a conformity assessment

From 2026, AI systems classified as high-risk under the EU Artificial Intelligence Act (Regulation (EU) 2024/1689) will have to undergo a conformity assessment and obtain a CE marking before being placed on the EU market or put into service.


October 19, 2025 • 7 min read

EU and UK AI regulation compared: implications for software, data, and AI projects

Both the European Union and the United Kingdom are shaping distinct—but increasingly convergent—approaches to AI regulation.

For companies developing or deploying AI solutions across both regions, understanding these differences is not an academic exercise. It directly affects how software and data projects are planned, documented, and maintained.


October 9, 2025 • 5 min read

When AI and GDPR meet: navigating the tension between AI and data protection

When AI-powered systems process or generate personal data, they enter a regulatory minefield — especially under the EU’s General Data Protection Regulation (GDPR) and the emerging EU AI Act regime.


September 17, 2025 • 4 min read

6 AI integration use cases enterprises can adopt for automation and decision support

 

The question for most companies is no longer if they should use AI, but where it will bring a measurable impact. 

