Agentic AI and the rewiring of payment regulation

Autonomous AI agents are beginning to initiate payments. Existing authentication, liability and data governance frameworks are now being tested in real time.
Agentic AI systems, configured to act autonomously on the basis of dynamic reasoning with little or no human input, are beginning to reshape elements of online commerce. As the UK Information Commissioner’s Office has recently noted, such systems are already influencing how some consumers shop. For retailers and payment service providers (PSPs), this evolution raises substantive questions around user experience, regulatory compliance and risk allocation.
Some market participants have already responded. Certain online platforms have restricted the use of AI agents, while others have moved to enable agent-driven transactions, including adjustments to fraud and chargeback practices. These developments bring into sharper focus how existing payment regulation applies where transactions are initiated by autonomous systems.
Strong customer authentication and delegated authority
Agent-initiated payments challenge traditional assumptions embedded within the strong customer authentication (SCA) regime. The SCA framework requires that the payer is authenticated using at least two independent elements drawn from knowledge, possession or inherence, and that the payer is made aware of the payment amount and the payee.
Where an AI agent initiates a transaction, a question arises: is it the consumer's identity that is being authenticated, or the agent's delegated authority? Many SCA mechanisms, such as biometric verification, presuppose direct human involvement; an autonomous purchasing model complicates that assumption.
There are, however, potential routes to compliance within the existing framework. Credentials reflecting an initial consumer authentication may be tokenised and used by an AI agent to execute subsequent payments. Alternatively, SCA processes may be completed at onboarding, with the consumer authorising the agent to act thereafter. The ‘trusted beneficiaries’ exemption, permitting customers to whitelist specific merchants so that future payments do not require further authentication, may also be relevant in certain contexts.
In each scenario, the agent relies on authentication originally provided by a human user, but thereafter determines what to purchase, initiates the transaction and triggers the payment flow. The regulatory sufficiency of such models will depend on whether the authentication and consent architecture satisfies SCA requirements in substance as well as form.
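The tokenised-credential route can be pictured in a short sketch: SCA is completed once at onboarding, after which the agent presents a signed, scoped token with each payment instruction. This is a minimal illustration under stated assumptions, not any PSP's actual API; every function name and the token structure are invented for the example, and a production scheme would rely on an established payment tokenisation standard rather than raw HMACs.

```python
import hashlib
import hmac
import json
import secrets
import time

# In practice this key would be held and managed by the PSP.
SIGNING_KEY = secrets.token_bytes(32)

def issue_agent_token(user_id: str, max_amount_pence: int,
                      allowed_payees: list[str], ttl_seconds: int) -> dict:
    """Issued only after the user completes SCA at onboarding.

    The token scopes the agent's delegated authority: a spending cap,
    a payee whitelist (cf. the 'trusted beneficiaries' exemption) and
    an expiry, so each later payment traces back to that one
    authentication event.
    """
    claims = {
        "user": user_id,
        "max_amount_pence": max_amount_pence,
        "allowed_payees": allowed_payees,
        "expires_at": time.time() + ttl_seconds,
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify_payment(token: dict, payee: str, amount_pence: int) -> bool:
    """Checks an agent-initiated payment against the delegated scope."""
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False  # token tampered with, or not issued by us
    c = token["claims"]
    return (time.time() < c["expires_at"]
            and payee in c["allowed_payees"]
            and amount_pence <= c["max_amount_pence"])
```

The point of the sketch is the regulatory shape, not the cryptography: the human authenticates once, and every subsequent agent payment is mechanically bounded by the scope the consumer consented to.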
User experience and regulatory exposure
Any reconfiguration of payment processes must account not only for compliance but also for user experience. Agentic AI commerce is premised on reducing human input. If users are required to confirm every discrete action, the efficiency gains of autonomous systems diminish.
At the same time, payment protocols must accurately interpret the agent’s action as reflecting the user’s intent. Shared technical standards may be necessary to enable AI agents to transact seamlessly across multiple merchants, rather than requiring bespoke integrations for each retailer.
Security architecture must operate behind the scenes. Authentication, credential management, tokenisation and fraud detection must function without requiring repeated user intervention. Poorly designed consent journeys risk failed payments, abandoned transactions and potential regulatory exposure.
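One way that behind-the-scenes requirement can be sketched is a silent risk-routing step: most agent payments pass with no user contact, and only genuine outliers escalate to a step-up authentication prompt. The thresholds, names and heuristics below are illustrative assumptions, not a real fraud model.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class AgentRiskProfile:
    """Rolling view of one user's agent-driven payments (illustrative)."""
    known_payees: set = field(default_factory=set)
    amounts_pence: list = field(default_factory=list)

def route_payment(profile: AgentRiskProfile, payee: str,
                  amount_pence: int) -> str:
    """Silent risk routing: the user is only interrupted for outliers.

    A payment to a new payee that is also far above the user's typical
    spend triggers a step-up prompt; everything else is approved with
    no user intervention, preserving the agent's efficiency gains.
    """
    typical = mean(profile.amounts_pence) if profile.amounts_pence else 0
    new_payee = payee not in profile.known_payees
    outlier = bool(typical) and amount_pence > 5 * typical
    if new_payee and outlier:
        return "step_up"      # surface to the user just this once
    profile.known_payees.add(payee)
    profile.amounts_pence.append(amount_pence)
    return "approve"          # handled entirely behind the scenes
```

The design choice worth noting is where friction lands: the consent journey is invisible for routine behaviour and reappears only where regulatory and fraud risk concentrate.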
Allocation of liability in a four-actor ecosystem
The introduction of agentic AI also complicates traditional liability allocation models. In conventional payment flows, responsibility typically sits across three actors: consumer, PSP and merchant. Agentic commerce introduces additional layers, including model developers and deploying businesses, each with distinct capabilities and risk profiles.
There is currently no specific guidance addressing liability where an AI agent is involved in payment initiation. Uncertainty may arise if an agent over-orders, pays the wrong merchant or misinterprets consumer instructions.
Financial services and retail contracts may therefore require reconsideration. Agreements may need to define when an agent is deemed to be acting “on behalf of” the consumer, establish loss-sharing mechanisms for unauthorised or erroneous agent actions, and incorporate indemnities and service levels reflecting model behaviour rather than focusing solely on availability or settlement metrics. Absent such clarity, disputes and recovery challenges are likely to increase.
Data governance and auditability
Agentic AI systems ingest and process data from multiple sources at scale and speed. This intensifies governance obligations beyond those typically associated with payment processing alone. Retailers and PSPs enabling agent-driven transactions are likely to require data protection impact assessments.
Compliance with purpose limitation and data minimisation principles may require reassessment where behavioural, transactional and contextual data are combined. Additional risks arise in relation to international data transfers by downstream model providers, use of customer data in agent training, and potential data leakage where agents interface with multiple platforms.
Auditability is also critical. Organisations enabling agent-driven payments may need to maintain detailed logs capturing the agent’s inputs and outputs, the payment instruction generated, and timestamped traceability of decision steps. Demonstrable provenance, evidencing how an agent reached a particular action, may be expected by regulators. Technical and contractual mechanisms enabling immediate suspension of agent activity may also be necessary where behaviour becomes anomalous or high risk.
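As a sketch of what such logging might look like, the class below keeps an append-only, timestamped trace per payment and includes a suspension flag as a kill switch. The structure is a hypothetical illustration of the requirements described above, not a reference implementation.

```python
import time
import uuid

class AgentAuditLog:
    """Append-only trace of an agent's decision steps (a sketch).

    Each record links the agent's inputs, its output and the resulting
    payment instruction under one trace_id, giving the timestamped
    provenance a regulator might expect to see.
    """

    def __init__(self):
        self.records = []
        self.suspended = False

    def log_step(self, trace_id: str, step: str, detail: dict) -> None:
        # step might be "input", "decision" or "payment_instruction"
        self.records.append({
            "trace_id": trace_id,
            "step": step,
            "detail": detail,
            "at": time.time(),
        })

    def suspend(self, reason: str) -> None:
        """Kill switch: blocks further activity when behaviour is anomalous."""
        self.suspended = True
        self.log_step(str(uuid.uuid4()), "suspension", {"reason": reason})

    def trace(self, trace_id: str) -> list:
        """Reconstructs the full decision path behind one payment."""
        return [r for r in self.records if r["trace_id"] == trace_id]
```

In use, every agent action writes a step before the payment flow proceeds, so the question "how did the agent reach this action?" reduces to replaying one trace.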
