
April 8, 2026

Why AI slows down digital banking delivery before it speeds it up

AI can shorten time to first implementation, but in banking products, it often lengthens time to release. The reason is that a feature that looked small in a prototype becomes much larger once it has to move through internal data access, entitlement logic, testing, release approval, audit evidence, and change management.


That trade-off matters more now because the Digital Operational Resilience Act (DORA) has applied since 17 January 2025, and because the European Central Bank (ECB) is explicitly treating operational resilience, ICT capabilities, third-party risk, change management, and banks’ use of AI as supervisory priorities for 2026–2028.



The delivery trade-off is not abstract


The useful way to think about AI in a banking product is not “does it work?” but “which delivery metrics improve first, and which get worse?” Time to first build may improve.


At the same time, the feature can increase the number of systems touched, the number of approval points, the test surface, the amount of release evidence required, and the effort needed to operate the change safely.


That pattern is what happens when a new capability enters an environment where operational resilience, ICT risk management, incident handling, and third-party dependencies are already under heavier scrutiny.



Internal data turns a feature into a platform change


The first serious slowdown is usually data, not model quality.

A prototype can run on a narrow slice of the product. A production feature rarely can. Useful context is spread across customer records, product logic, internal services, support history, operational procedures, and approval workflows.


Once the feature starts drawing on that wider context, it stops being a local change and becomes a platform change, because the organisation has to decide which sources are authoritative, how current they need to be, what happens when systems disagree, and which data is allowed into the feature at all.
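
Those decisions can be made concrete as a small source registry. The sketch below is illustrative rather than a prescribed design: the source names, staleness limits, and allow flags are assumptions standing in for whatever the data owners actually decide.

```python
from dataclasses import dataclass

# Illustrative registry: which sources feed the feature at all,
# which are authoritative, and how stale their data may be.
@dataclass(frozen=True)
class SourceRule:
    authoritative: bool        # is this the system of record?
    max_staleness_hours: int   # how current the data must be
    allowed_in_feature: bool   # is this data permitted into the feature?

# Hypothetical sources; real names and limits come from the data owners.
RULES = {
    "customer_records": SourceRule(True, 1, True),
    "support_history":  SourceRule(False, 24, True),
    "draft_procedures": SourceRule(False, 0, False),  # excluded outright
}

def usable_sources(staleness_hours: dict) -> list:
    """Sources that are both allowed and fresh enough to use right now."""
    usable = []
    for name, rule in RULES.items():
        if not rule.allowed_in_feature:
            continue
        if staleness_hours.get(name, float("inf")) > rule.max_staleness_hours:
            continue
        usable.append(name)
    return usable

# customer_records is 2 hours stale (limit 1), so only support_history passes.
fresh = usable_sources({"customer_records": 2, "support_history": 3})
```

The point of the sketch is that every entry is an explicit organisational decision, made once and reused, instead of an implicit assumption buried in a retrieval pipeline.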


That is why many AI initiatives appear to move quickly at the start and then stall later. The model may be ready; the surrounding data path is not.



Entitlements make the scope wider than the brief suggested


Access control is the next place where the feature grows.

In banking platforms, entitlements are usually shaped by role, account relationship, geography, workflow stage, product scope, support privilege, and internal separation of duties. A normal feature may stay inside one established access boundary, but an AI feature often retrieves, combines, summarizes, or recommends across several parts of the system at once.


That is where delivery operations start pulling in more people and adding more review. The question is then whether the entitlement model still holds once the information path becomes broader and less linear.
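
One way to keep the entitlement model intact is to enforce it at retrieval time, before anything is combined or summarized. The sketch below assumes a simple scope-per-document model; the scope names and document shape are hypothetical, not a real platform's API.

```python
# Hypothetical scope-per-document entitlement filter: the feature may
# only retrieve and combine what the caller could already see directly.
def entitled_subset(documents, caller_scopes):
    """Drop any document whose required scope the caller does not hold."""
    return [d for d in documents if d["required_scope"] in caller_scopes]

documents = [
    {"id": "account-summary", "required_scope": "accounts:read"},
    {"id": "support-note",    "required_scope": "support:read"},
    {"id": "risk-memo",       "required_scope": "risk:read"},
]

# A support agent without risk privileges: the risk memo never reaches
# the model, so no summary or recommendation can leak it.
visible = entitled_subset(documents, {"accounts:read", "support:read"})
```

Filtering before generation, rather than after, is the design choice that keeps the broader, less linear information path inside the existing access boundaries.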


The ECB’s current priorities make this pressure easy to understand. They explicitly connect operational resilience to ICT capabilities, third-party risk, cybersecurity, incident response, and ICT change management, while also saying that broader AI adoption requires a structured approach to governance and risk management.



Auditability makes the release path heavier


A feature can be useful and still not be ready for release.

That usually becomes obvious when the conversation turns from “can we build it?” to “can we explain it later?” A serious implementation needs a clear answer to questions such as:

  • what source data shaped the output,
  • what version of the logic or configuration was live,
  • what evidence exists for review, and
  • whether the result can be reconstructed later if challenged.
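
Those questions translate almost directly into an audit record written at the moment the output is produced. The field names below are assumptions, but each one maps to a question a reviewer will eventually ask.

```python
import hashlib
from datetime import datetime, timezone

# Illustrative audit record for one AI-generated output.
def audit_record(output: str, source_ids: list, config_version: str) -> dict:
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "config_version": config_version,   # what logic/config was live
        "source_ids": sorted(source_ids),   # what source data shaped the output
        # A hash of the output lets a later review confirm that a
        # reconstructed result matches what was actually produced.
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }

record = audit_record("Suggested reply ...", ["crm:123", "kb:7"], "cfg-2026-04-01")
```

Writing this record is cheap at generation time and close to impossible to backfill later, which is why deciding its shape early matters.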


This is where many pilots lose momentum. The prototype proves that the use case is possible, not that it is governable.


That is not theoretical. DORA is built around ICT risk management, and the ECB is paying targeted attention to this area, as well as to the prudential implications of AI use.



The test surface gets wider, not just deeper


AI features change the amount of testing needed, but more importantly they change the kind of testing needed.


A conventional change already needs functional and integration coverage. An AI feature usually adds more:

  • behavior under ambiguous input,
  • fallback behavior,
  • retrieval or configuration regressions,
  • safe failure paths, and
  • evidence that the feature respects the control model around it.
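
The safe-failure bullet in particular lends itself to a direct test. The sketch below is a hypothetical shape for that check: when retrieval fails or returns nothing, the feature must degrade to an explicit non-answer instead of generating without context.

```python
# Hypothetical safe-failure contract: a broken or empty retrieval path
# must produce an explicit non-answer, never an unsourced generation.
def answer(query: str, retrieve) -> dict:
    try:
        context = retrieve(query)
    except Exception:
        return {"status": "unavailable", "answer": None}  # fail closed
    if not context:
        return {"status": "no_context", "answer": None}   # no silent guessing
    return {"status": "ok", "answer": f"Drafted from {len(context)} sources."}

# The failure paths are exercised as first-class behavior, not edge cases.
def broken_retriever(query):
    raise TimeoutError("retrieval backend down")

assert answer("card limits?", broken_retriever)["status"] == "unavailable"
assert answer("card limits?", lambda q: [])["status"] == "no_context"
assert answer("card limits?", lambda q: ["doc-1"])["status"] == "ok"
```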


That widens the test surface, because the team is not only validating logic but also how the feature behaves across systems, permissions, and operating conditions.


This is one reason the first production implementation is usually the slowest one.



AI exposes weak delivery operations quickly


One of the more useful things AI does is expose delivery friction that was already there.


If the product already has too many handoffs, overloaded architecture review, brittle cross-system dependencies, or slow release governance, AI will bring that to the surface fast. Not because it creates all the drag by itself, but because it touches more boundaries in one go.


That is why early technical progress and later delivery slowdown can both be true at the same time. The implementation moved quickly, the operating model did not, and the regulatory context reinforces the gap.



The first serious implementation is usually the slowest one


The first meaningful AI feature forces decisions that later features can reuse: which internal sources are in scope, how entitlements apply across retrieval and generation, what release evidence is required, what “safe failure” means, and who owns future changes to the feature once it is live.


Once those decisions exist, the next implementation can move faster because the path no longer has to be invented from scratch.

That is where the speed-up usually starts, because the organisation has already adapted and turned one AI feature into a repeatable production pattern.



What is worth deciding early


The organisations that get through this with less friction usually make a few decisions earlier than others, and you can too:


  1. Choose a bounded use case.
  2. Keep AI in an advisory role before letting it affect harder execution paths.
  3. Define which sources are in scope and which are out.
  4. Settle entitlement boundaries before retrieval or orchestration grows around the wrong assumptions.
  5. Decide what release evidence will be required before the feature is already waiting for sign-off.
  6. Separate two different moments: a working implementation and a production-ready capability.


That distinction is where a lot of delivery time is won or lost.



Review your AI delivery path with Blocshop


This is usually where Blocshop becomes useful.

Not at the “AI is interesting” stage, but at the point where a concrete feature has to move through architecture, internal data handling, entitlements, testing, release control, and production operation without turning into another source of delivery drag.


If your organisation is adding AI to a banking product, the relevant question is whether the path to production becomes shorter or heavier once data, permissions, evidence, and release operations are treated seriously.


Blocshop helps review that path before the integration becomes another bottleneck.
