
Agentic AI: A new threat landscape for payment companies

Nicole Dunn
April 22, 2026
6 min read

Mentions of “AI agents” in underground forums have surged 477% in the past year. Unlike early AI capabilities, which industrialise existing fraud, agentic AI is creating net new attack surfaces that fraudsters are increasingly exploiting. 

Where traditional generative AI tools respond to prompts, agentic systems can autonomously plan, execute multi-step tasks, interact with APIs, and make decisions without continuous human input.

Increasingly, they’re being given the keys to wallets. By removing historic human bottlenecks, agents enable fraud at a speed, scale, and sophistication previously unattainable. What once required coordinated teams can now be executed by a single AI system, automating manipulation, impersonation, and adaptation in real time. 

The result is a growing asymmetry: attacker capability is compounding, while many fraud monitoring stacks remain static. In payments, that gap presents an existential risk. 

The assumptions breaking beneath us

Modern fraud defence models are built on foundational assumptions that AI is dismantling:

  1. Identity documents are hard to forge.
    High-resolution generative models now produce pixel-perfect synthetic faces, fake IDs, and counterfeit documents in minutes for under $20. 
  2. Liveness checks prove a real human is present.
    Deepfake systems can now mimic micro-expressions, replicate blinking patterns and voice cadence, and generate responsive real-time video. Liveness checks that once provided strong assurance of a human presence can now be spoofed convincingly. 
  3. A human initiates every transaction.
    Traditional fraud models assume a person is making the purchase, with interfaces and authentication designed around human interaction. Agentic AI introduces new “person-not-present” transactions: the transaction is initiated by a machine, authorised indirectly, with purchase intent that may not be human-readable. 
  4. Behavioural patterns are hard to fake.
    Fraud teams have long relied on behavioural signals – typing cadence, mouse movement, session length, browsing patterns – as a rich layer of defence. AI systems can now simulate these actions indistinguishably from legitimate user behaviour. 

This creates an authenticity crisis: fraudulent identities, documentation, and communications are indistinguishable from legitimate counterparts. 

Existing fraud, industrialised 

Every major fraud category that payment companies face has been dramatically amplified in speed, scale, and sophistication. 

Here is how AI is changing each of these fraud vectors:

Phishing

LLMs generate highly contextual, grammatically flawless communications that mimic legitimate personal and brand writing styles. Conversational agents adapt in real time, building trust with personalised, context-aware dialogue. ‘Fraud-as-a-Service’ kits now include breached data, templates, and automated scripts. This enables attacks at an industrial scale, with AI-driven scams now accounting for more than half of digital financial fraud. 

Deepfake social engineering

Voice cloning and video deepfakes enable real-time impersonation of executives, vendors, and support staff, complete with natural pauses, emotional cues, and background sounds. This has driven a more than 2,000% increase in deepfake attacks over three years, with one in fifteen identity fraud attempts now involving a deepfake avatar.

Synthetic identity fraud

AI combines real and fabricated personal information into coherent personas, fabricating employment records, addresses, and deep financial histories that pass KYC. These identities are increasingly difficult to distinguish from legitimate customers, and don’t trigger the artifacts traditional systems are trained to catch. 

Document forgery

Scam merchants pass onboarding compliance checks with legitimate-looking names, category codes, websites, documentation, and operational facades. Consumer digital document forgery is up 200%+ year-on-year, as generative AI produces more compelling fake IDs. 

Account takeover 

AI automates credential stuffing at scale, adapts in real-time to security challenges, and uses deepfakes to bypass voice/video verification on customer service calls. Even highly trained support agents can't reliably detect deception. 

The new attack surfaces agents introduce

Beyond amplification, agentic systems create new risks that existing systems are not built to handle: 

Rogue agent exploitation

Criminals can breach systems to reconfigure agent objectives, changing purchase parameters, payment destinations, or authorisation thresholds. An agent authorised to ‘buy office supplies under $500’ could be manipulated through prompt injection to purchase high-value goods shipped to a different address. Critically, this attack happens in the agent’s upstream decision logic, not the underlying payment rail. 
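To make this concrete, here is a minimal, hypothetical sketch of one mitigation: enforcing hard spending policy at the payment layer, outside the agent's context, so that a prompt-injected agent cannot rewrite its own limits. The `PaymentRequest` shape, policy fields, and agent ID are illustrative, not a description of any real system.

```python
from dataclasses import dataclass

# Hypothetical, simplified model of an agent-initiated purchase request.
@dataclass(frozen=True)
class PaymentRequest:
    agent_id: str
    amount: float
    merchant_category: str
    ship_to: str

# Immutable policy stored server-side, outside the agent's context window,
# so a prompt-injected agent cannot talk its way past its own limits.
POLICY = {
    "agent-7f3a": {
        "max_amount": 500.00,
        "allowed_categories": {"office_supplies"},
        "allowed_ship_to": {"hq-warehouse"},
    }
}

def authorise(req: PaymentRequest) -> bool:
    """Enforce hard limits at the payment layer, regardless of what the
    agent's (possibly poisoned) upstream decision logic has concluded."""
    policy = POLICY.get(req.agent_id)
    if policy is None:
        return False  # unknown agent: fail closed
    return (
        req.amount <= policy["max_amount"]
        and req.merchant_category in policy["allowed_categories"]
        and req.ship_to in policy["allowed_ship_to"]
    )

# A prompt-injected "buy office supplies" agent trying to reroute goods:
print(authorise(PaymentRequest("agent-7f3a", 450.0, "electronics", "attacker-address")))  # False
```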

Agent impersonation and spoofing

Malicious bots can disguise themselves as legitimate AI shopping agents when interacting with merchants. Visa has reported a 25% increase in malicious bot-initiated transactions over a six-month period, rising to 40% in the US. These fake agents can pass automated security checks, offer below-market prices to attract real agents, and harvest payment credentials during the transaction.

Counterfeit merchant ecosystems 

Fraudsters are building entire synthetic merchant ecosystems: fake profiles, fake brand websites, ghost identities. AI shopping agents, designed to find the best deal, are particularly vulnerable to these setups. Once an agent completes a purchase with stored credentials, the fake merchant harvests the consumer’s payment data for unauthorised transactions downstream.

Agent objective poisoning 

If the instructions guiding an external AI agent can be tampered with (through prompt injection, manipulated data feeds, or compromised APIs) the agent becomes the weapon. Financial institutions cannot currently verify the underlying objectives of an agent interacting with their systems. Since agents are specifically designed to emulate human behaviour, distinguishing a legitimate agent from a poisoned one is extraordinarily difficult. This undermines cybersecurity controls, identity verification, and fraud detection tools designed for human users. 
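There is no settled standard for agent attestation yet, but one direction the industry is exploring is cryptographically signed agent credentials. The sketch below uses a simple HMAC shared-secret scheme purely for illustration; the registry, key handling, and payload shape are all hypothetical. Note what it does and does not prove: provenance, not intent.

```python
import hashlib
import hmac
import json

# Hypothetical registry of vetted agent platforms and their shared secrets.
# In practice this role might be played by PKI certificates or a network-level
# attestation scheme; HMAC keeps the sketch self-contained.
REGISTERED_AGENTS = {"shopbot-v2": b"demo-secret-key"}

def sign_request(payload: dict, key: bytes) -> str:
    body = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(key, body, hashlib.sha256).hexdigest()

def verify_agent(agent_id: str, payload: dict, signature: str) -> bool:
    """Verify the request really came from a registered agent platform.
    This proves provenance, not objectives -- a registered agent can still
    be objective-poisoned, so this is one layer, not a complete control."""
    key = REGISTERED_AGENTS.get(agent_id)
    if key is None:
        return False  # unregistered agent: treat as untrusted traffic
    expected = sign_request(payload, key)
    return hmac.compare_digest(expected, signature)

payload = {"amount": 120.0, "merchant": "acme-store"}
sig = sign_request(payload, b"demo-secret-key")
print(verify_agent("shopbot-v2", payload, sig))           # True
print(verify_agent("shopbot-v2", payload, "forged-sig"))  # False
```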

Autonomous multi-step fraud 

AI agents can now orchestrate entire fraud kill chains autonomously: reconnaissance, phishing, credential harvesting, lateral movement, and extraction. What once required coordinated teams operating across time zones can now be executed by a single system that adapts in real time to victims’ responses and targets’ defences. 

Machine-to-machine fraud without clear liability

If an AI shopping agent is tricked by a synthetic merchant into harvesting payment data, who is responsible? Current legal and financial frameworks are designed for human accountability; they break when the actor is an autonomous script.

If an autonomous agent is socially engineered, does Strong Customer Authentication (SCA) still apply? Does intent exist if no human directly executed the transaction? How should chargeback frameworks interpret machine-mediated consent? These questions expose structural gaps in regulatory and liability models designed for human commerce.

What breaks in the traditional stack? 

Payment companies and merchants have typically relied on a layered monitoring approach: identity is verified upfront, rules engines catch known patterns, machine learning models score transactions for risk, manual review teams investigate flagged cases, and a final decision is made.

Identity Verification → Rules Engine → ML Scoring → Manual Review → Decision


Each layer is under stress: 

  • Rules engines are trivially evaded by AI agents that can probe detection thresholds systematically, then calibrate attacks to remain just below them (see the sketch after this list).
  • Scoring models trained on historical fraud patterns struggle to detect new fraud typologies created by AI. Worse, synthetic identities that mimic legitimate behaviour contaminate training datasets, compounding false negatives over time and degrading the very models designed to catch them. 
  • Manual review is overwhelmed. Alert volumes have surged 800%+, but fraud hiring and upskilling can’t keep pace. The fidelity of AI-generated content renders manual reviews increasingly ineffective, and agentic transactions lack the human behavioural signals reviewers typically assess. 
  • Identity verification is increasingly unreliable as static attributes (ID numbers, dates of birth, addresses) become poor proxies when data breaches are ubiquitous, mimicry is high-fidelity, and synthetic identities are coherent. 
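To illustrate the first of those failure modes, here is a toy example (illustrative only, not attack tooling) of why a static threshold offers little protection once an adversary can probe it programmatically: a simple binary search pins the limit in about twenty probes. The `LIMIT` value is hypothetical.

```python
# Toy illustration: a static rules engine flags any transaction >= LIMIT.
# An automated attacker that observes which probes get flagged can
# binary-search the threshold, then sit just below it indefinitely.
LIMIT = 740.0  # hidden from the attacker

def is_flagged(amount: float) -> bool:
    return amount >= LIMIT

lo, hi = 0.0, 10_000.0
for _ in range(20):  # ~20 probes pin the threshold to within a cent
    mid = (lo + hi) / 2
    if is_flagged(mid):
        hi = mid
    else:
        lo = mid

print(f"Attacker's estimate of the limit: {hi:.2f}")  # ~740.00
print(f"'Safe' transaction size: {lo:.2f}")           # just below the limit
```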

The emerging defence stack 

This creates an asymmetry between defenders and attackers: the signals that defenders depend on are rapidly degrading, while attack quality is improving. Traditional monitoring cannot afford to wait for a human to review a flag while an AI agent executes a multi-step fraud kill chain in milliseconds. 

Payment companies must rethink fraud defence across four dimensions:  

1. Point-in-time to continuous verification

Defence systems must shift to continuous, session-level risk assessment that re-evaluates trust throughout the customer journey and incorporates network relationships and cross-platform patterns beyond initial onboarding.
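As a minimal sketch of what session-level re-scoring might look like, the example below accumulates risk per event and re-decides on every step. The event types, weights, and thresholds are illustrative placeholders, not a production model.

```python
# Illustrative event weights; a real system would learn these.
RISK_WEIGHTS = {
    "login_new_device": 0.25,
    "shipping_address_changed": 0.20,
    "payment_method_added": 0.15,
    "velocity_spike": 0.30,
}

class Session:
    def __init__(self, block_threshold: float = 0.6):
        self.risk = 0.0
        self.block_threshold = block_threshold

    def observe(self, event: str) -> str:
        """Re-evaluate trust as the session unfolds: any single event can
        tip an initially clean session into step-up auth or a block."""
        self.risk = min(1.0, self.risk + RISK_WEIGHTS.get(event, 0.0))
        if self.risk >= self.block_threshold:
            return "block"
        if self.risk >= 0.4:
            return "step_up_auth"
        return "allow"

s = Session()
for event in ["login_new_device", "payment_method_added", "velocity_spike"]:
    print(event, "->", s.observe(event))
```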

2. Human-centric to agent-aware 

Transaction monitoring must bifurcate into human-initiated and agent-initiated flows, with tailored risk models for each identity type. Defence systems need to evaluate agent provenance, decision trails, and interaction patterns for machine-initiated transactions, while retaining biometric and behavioural signals for human commerce. 
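A minimal sketch of that bifurcation, with both risk models stubbed out; the point is the routing, not the scores. Field names like `initiator` and `agent_attested` are hypothetical.

```python
# Route each transaction to a risk model matched to its initiator type.
def score_human_flow(txn: dict) -> float:
    # Would consume behavioural biometrics: typing cadence, device, session.
    return 0.1  # stub

def score_agent_flow(txn: dict) -> float:
    # Would consume agent provenance, attestation, and decision-trail signals.
    return 0.05 if txn.get("agent_attested") else 0.3  # stub

def score(txn: dict) -> float:
    if txn.get("initiator") == "agent":
        return score_agent_flow(txn)
    return score_human_flow(txn)

print(score({"initiator": "agent", "agent_attested": False}))  # 0.3
print(score({"initiator": "human"}))                           # 0.1
```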

3. Single-layer to multi-modal detection

Identity, behavioural, network, device, and transactional signals must be cross-referenced dynamically, combining transaction patterns with entity-level risk scoring, and correlating signals across payment rails that were historically siloed. 
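As an illustration, here is a simple weighted fusion of per-signal scores into an entity-level risk score. The weights and the weighted-sum approach are placeholders; in practice the fusion function itself is often learned.

```python
# Illustrative fusion weights across the five signal families.
SIGNAL_WEIGHTS = {
    "identity": 0.25,
    "behaviour": 0.20,
    "network": 0.20,   # shared devices, mule-ring linkages across rails
    "device": 0.15,
    "transaction": 0.20,
}

def entity_risk(signal_scores: dict) -> float:
    """Fuse per-signal scores (each in [0, 1]) into one entity-level score."""
    return sum(w * signal_scores.get(s, 0.0) for s, w in SIGNAL_WEIGHTS.items())

# A synthetic identity may look clean on any single channel but light up
# once network linkage is cross-referenced with the rest:
print(entity_risk({"identity": 0.1, "behaviour": 0.1, "network": 0.9,
                   "device": 0.2, "transaction": 0.3}))  # 0.315
```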

4. Reactive rulesets to agentic defence

If fraud is being designed and executed by AI agents, it must be defended by systems that adapt at the same speed, including AI-powered anomaly detection and risk scoring dynamically baselined to historic and segment-level behaviour. 
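A minimal sketch of dynamic baselining: each entity is scored against its own rolling history (a z-score here), so the effective threshold moves as behaviour moves. The window size and example history are illustrative.

```python
import statistics

def anomaly_score(history: list, value: float, window: int = 30) -> float:
    """Score a new value against the entity's own rolling baseline,
    rather than a global static rule."""
    recent = history[-window:]
    mu = statistics.fmean(recent)
    sigma = statistics.pstdev(recent) or 1e-9  # avoid division by zero
    return abs(value - mu) / sigma             # z-score vs rolling baseline

history = [20, 25, 22, 30, 24, 26, 28, 23, 27, 25]  # a wallet's usual spend
print(anomaly_score(history, 26))   # well under 1: in line with its baseline
print(anomaly_score(history, 480))  # very large z-score: flag for review
```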

Defending against agentic fraud requires systems designed for dynamic, cross-rail intelligence.

This is the architecture that Orca has been building. Our AI-driven fraud monitoring and intelligence platform was designed for environments where these challenges are most acute: emerging markets with wallet-first payment rails and fast-evolving fraud patterns. 

Across 70+ countries, Orca uses custom ML models to detect new fraud patterns and network linkages as they emerge, with AI-prioritised review queues that allow teams to focus where intervention matters most. 

As agentic fraud reshapes the threat landscape, context-aware, adaptive monitoring is no longer optional. In payments, trust is the product. Right now, AI is eroding it faster than most companies are rebuilding it. The cost of inaction is fast becoming the cost of relevance.

Agent-mediated commerce is arriving faster than most fraud stacks are evolving. If your organisation is assessing how to adapt monitoring, identity, and liability frameworks for this shift, Orca works with payment leaders globally to design defence architectures built for machine-speed fraud.

Let’s chat to assess whether your stack is ready. Get in touch with us.


