
November 3, 2025

CE marking software under the EU AI Act – Who needs it and how to prepare a conformity assessment

From August 2026, AI systems classified as high-risk under the EU Artificial Intelligence Act (Regulation (EU) 2024/1689) will have to undergo a conformity assessment and obtain a CE marking before being placed on the EU market or put into service. For many software vendors—especially those operating in finance, healthcare, HR, or public services—this introduces product-safety obligations that until now applied mainly to physical goods.



Does software really need CE marking under the AI Act?


The short answer is “yes” — if the software falls within the Act’s high-risk scope.


High-risk AI examples (Annex III): credit scoring, CV screening, exam grading in education, safety components of critical infrastructure, law-enforcement risk assessment, and certain biometric systems. Note that AI used purely to detect financial fraud is expressly excluded from the credit-scoring category, and diagnostic medical software is usually caught as a regulated product under Annex I (via the Medical Device Regulation) rather than Annex III.




Who is responsible for CE marking?


Responsibility is defined for providers, importers, and distributors in Articles 16–25 of the AI Act, and for deployers in Article 26:

  • Provider (Art. 16) – bears primary responsibility: runs the conformity assessment, draws up the EU Declaration of Conformity, and affixes the CE mark.
  • Importer (Art. 23) – verifies that the provider has carried out the conformity assessment and drawn up the required documentation before placing the system on the EU market.
  • Distributor (Art. 24) – checks that the system bears the CE mark and is accompanied by the required documentation.
  • Deployer (Art. 26) – uses the system in accordance with its instructions for use; a deployer that rebrands or substantially modifies a high-risk system takes on the provider’s obligations (Art. 25).



How the conformity assessment works


The procedure is set out in Article 43 and Annexes VI–VII. For most Annex III systems (points 2–8), the provider self-assesses under the internal-control procedure of Annex VI. For biometric systems (Annex III, point 1), internal control is available only where harmonised standards or common specifications have been applied in full; otherwise a notified body must assess the quality management system and the technical documentation under Annex VII. High-risk AI embedded in products covered by Annex I follows the conformity assessment procedure of the relevant sectoral legislation.
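
As a rough illustration (not legal advice), the Article 43 routing logic can be sketched as a small decision function. The function and enum names are our own; the Act itself describes these procedures only in prose.

```python
from enum import Enum, auto

class Route(Enum):
    INTERNAL_CONTROL = auto()  # self-assessment under Annex VI
    NOTIFIED_BODY = auto()     # third-party assessment under Annex VII
    SECTORAL = auto()          # procedure of the relevant Annex I legislation

def conformity_route(annex_iii_point: int | None,
                     standards_applied_in_full: bool = False,
                     annex_i_product: bool = False) -> Route:
    """Simplified sketch of the Article 43 decision logic."""
    if annex_i_product:
        # Art. 43(3): embedded high-risk AI follows the sectoral procedure
        return Route.SECTORAL
    if annex_iii_point == 1:
        # Art. 43(1): biometrics may use internal control only where harmonised
        # standards or common specifications were applied in full
        return Route.INTERNAL_CONTROL if standards_applied_in_full else Route.NOTIFIED_BODY
    # Art. 43(2): Annex III points 2-8 use internal control
    return Route.INTERNAL_CONTROL

# Example: a credit-scoring system (Annex III, point 5) self-assesses
assert conformity_route(5) is Route.INTERNAL_CONTROL
```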



Key elements required before CE marking


General CE-mark placement rules come from Regulation (EC) No 765/2008, Art. 30 and Decision 768/2008/EC, Annex II.

  • AI risk management system ([Art. 9])
  • Technical documentation file (Annex IV) – system description, data sets, metrics, logging, human oversight plan (a skeleton is sketched after this list)
  • Quality management system ([Art. 17]; [Annex VII Pt. 3])
  • EU Declaration of Conformity ([Art. 47])
  • Affix CE mark ([Art. 48]) once compliance is demonstrated
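
To make the Annex IV item concrete, here is a minimal, illustrative skeleton of how a provider might structure the technical documentation file in code. The field names are our own grouping of the Annex IV headings, not official terminology.

```python
from dataclasses import dataclass, field

@dataclass
class TechnicalDocumentation:
    """Illustrative skeleton loosely following the Annex IV headings."""
    system_description: str                      # intended purpose, versions, hardware
    development_process: str                     # design choices, training methodology
    datasets: list[str] = field(default_factory=list)           # provenance, labelling, cleaning
    performance_metrics: dict[str, float] = field(default_factory=dict)  # accuracy, robustness
    risk_management_summary: str = ""            # cross-reference to the Art. 9 file
    logging_design: str = ""                     # Art. 12 record-keeping capabilities
    human_oversight_plan: str = ""               # Art. 14 measures
    standards_applied: list[str] = field(default_factory=list)  # harmonised standards, if any
```

Keeping this as structured data rather than scattered documents makes it easier to regenerate the file whenever the model or its datasets change.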

After CE marking: ongoing duties


Post-market monitoring ([Art. 72]), serious-incident reporting ([Art. 73]), and market surveillance under Regulation (EU) 2019/1020 all apply. Additionally, providers must keep the technical documentation at the disposal of authorities for 10 years after the system is placed on the market ([Art. 18]) and co-operate with competent authorities ([Art. 21]).
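
One way a provider might feed its post-market monitoring plan is a periodic check that live performance stays at the level declared in the technical documentation. The sketch below is a minimal example under that assumption; the window size and escalation policy are illustrative, not prescribed by the Act.

```python
from collections import deque

class PostMarketMonitor:
    """Minimal sketch: flag for review when rolling accuracy drops
    below the level declared in the technical documentation."""

    def __init__(self, declared_accuracy: float, window: int = 1000):
        self.declared_accuracy = declared_accuracy
        self.outcomes: deque[bool] = deque(maxlen=window)

    def record(self, prediction_correct: bool) -> None:
        self.outcomes.append(prediction_correct)

    def needs_review(self) -> bool:
        if len(self.outcomes) < (self.outcomes.maxlen or 0):
            return False  # not enough production data yet
        rolling = sum(self.outcomes) / len(self.outcomes)
        return rolling < self.declared_accuracy  # escalate per the monitoring plan
```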

 

When to start


With the AI Act’s high-risk obligations phasing in between August 2026 and August 2027, development and compliance work should run in parallel now. CE-marking delays can mean a suspended product launch, and penalties reach €15 million or 3 % of global turnover for breaches of the high-risk requirements, rising to €35 million or 7 % for prohibited practices ([Art. 99]).

 

EU vs UK approach


Let’s compare how the EU and the UK approach these issues, based on the UK Government Policy Paper: A Pro-Innovation Approach to AI Regulation (2024) and Introduction to AI assurance:

  • EU – binding, horizontal regulation: high-risk systems must pass a conformity assessment and carry the CE mark before they reach the market, with fines for non-compliance.
  • UK – no equivalent statute or marking: cross-sector principles (safety, transparency, fairness, accountability, contestability) are applied by existing regulators on a non-statutory basis, supported by voluntary AI assurance techniques.



Developer checklist for AI Act CE marking


Before an AI system can legally carry the CE mark, developers and providers must work through a series of concrete steps defined in the AI Act — from risk classification to technical documentation and final declaration of conformity.

  1. Identify risk category ([Annex III])
  2. Define provider/importer/deployer roles ([Art. 3])
  3. Build risk management workflow ([Art. 9])
  4. Prepare technical documentation ([Annex IV])
  5. Implement logging & versioning ([Art. 12]) – see the sketch after this list
  6. Ensure human oversight ([Art. 14])
  7. Verify accuracy, robustness & cybersecurity ([Art. 15])
  8. Draft Declaration of Conformity ([Art. 47])
  9. Affix CE mark ([Art. 48]) → Ready for market
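
As a minimal sketch of what step 5 could look like in practice: the event fields, the JSON-lines format, and the model identifier below are our own assumptions; Article 12 requires automatic recording of events over the system’s lifetime but does not prescribe a format.

```python
import hashlib
import json
from datetime import datetime, timezone

MODEL_VERSION = "credit-scorer-2.4.1"  # hypothetical version identifier

def log_decision(log_path: str, input_features: dict, output: dict,
                 overridden_by_human: bool = False) -> None:
    """Append one timestamped, versioned decision record (JSON lines)."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": MODEL_VERSION,
        # hash rather than store raw inputs, so logs stay traceable
        # without duplicating personal data
        "input_hash": hashlib.sha256(
            json.dumps(input_features, sort_keys=True).encode()).hexdigest(),
        "output": output,
        "overridden_by_human": overridden_by_human,  # supports Art. 14 oversight
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")
```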



✅ Get help from Blocshop


Blocshop helps organisations plan, build and safely integrate AI into their software systems by:

  • Assessing where AI can be applied in existing products or workflows
  • Designing architectures that meet both technical and regulatory requirements
  • Preparing the documentation, risk analysis and governance models needed for responsible AI deployment
  • Supporting teams through implementation, testing and integration with enterprise systems


Book a free consultation with Blocshop to assess your AI system’s compliance readiness.
