Ten risks in Nigeria’s new AML rules and what banks must do about them
In Part One, we established why the CBN’s new Baseline Standards for Automated AML Solutions rank among the world’s best. Here, we examine the risks those Standards create and the hard governance work that genuine compliance requires.
A regulatory framework is only as valuable as the quality of its implementation.
The CBN has been explicit on this point from the opening pages of its new Baseline Standards – they are designed to ensure “demonstrable effectiveness and not merely feature-based compliance or vendor-driven implementation”.
That phrase is both an aspiration and a warning. It tells institutions precisely what the CBN will be looking for when it examines compliance and what will not satisfy it.
What follows is an analysis of the ten most significant risks embedded in the new framework, explained in terms that non-technical readers can follow, with the supporting detail and specific Standards references that Compliance Officers and Risk Managers need to act on.
10. Algorithmic Bias
AI models used for customer risk scoring draw on attributes the Standards explicitly reference – geography, occupation, declared income, transaction channel and customer segment (§5.5a.iv). These variables can act as proxies for demographic characteristics.
A model trained predominantly on urban, formally employed, high-income customers will systematically score customers outside that profile as higher risk – not because they are, but because their behaviour looks statistically unfamiliar to the model.
In Nigeria’s context, the practical implications are significant. The country’s financial system serves extraordinary customer diversity – informal traders, agricultural producers, diaspora remittance recipients and mobile money users whose transaction patterns bear no resemblance to a Lagos salary earner. Bias here is not merely an ethical concern; it is a legal one.
The Nigeria Data Protection Act (NDPA) 2023 confers rights on individuals in relation to automated decisions that significantly affect them. Institutions that cannot demonstrate equitable treatment across their customer base carry regulatory and legal exposure that compounds over time.
The Standards require fairness audits and bias testing as part of annual independent model validation (§5.5b.i). What they do not yet specify is a fairness metric, a testing methodology or an acceptable disparity threshold – a gap that institutions must fill in their own governance frameworks.
What institutions must do – Before any AI model is deployed, define the customer dimensions to be tested – at a minimum geography, income band, business type and transaction channel.
Run disaggregated performance analysis across each dimension before go-live and at every validation cycle. Document adverse findings and remediation steps. Report fairness metrics to the Board Risk Committee as a standing agenda item, not as an appendix.
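The Standards do not prescribe a fairness metric or methodology, so institutions must choose their own. As one illustration only, a disaggregated analysis of this kind can be sketched as below: it computes per-group alert rates and false-positive rates for a chosen customer dimension (here a hypothetical "geography" grouping) and a simple max/min disparity ratio. The group labels, record schema and the idea of using a disparity ratio are assumptions for the sketch, not requirements drawn from the Standards or the NDPA.

```python
from collections import defaultdict

def disaggregated_rates(records):
    """Per-group alert rate and false-positive rate.

    records: list of dicts with keys
      'group'     - the customer dimension being tested (e.g. geography band)
      'alerted'   - True if the model flagged the customer
      'confirmed' - True if a raised alert was confirmed on investigation
    """
    stats = defaultdict(lambda: {"n": 0, "alerts": 0, "false_pos": 0})
    for r in records:
        g = stats[r["group"]]
        g["n"] += 1
        if r["alerted"]:
            g["alerts"] += 1
            if not r["confirmed"]:
                g["false_pos"] += 1
    return {
        group: {
            "alert_rate": g["alerts"] / g["n"],
            "false_pos_rate": (g["false_pos"] / g["alerts"]) if g["alerts"] else 0.0,
        }
        for group, g in stats.items()
    }

def disparity_ratio(rates, metric="alert_rate"):
    """Highest group rate divided by lowest; 1.0 means parity on that metric."""
    values = [v[metric] for v in rates.values()]
    lo = min(values)
    return max(values) / lo if lo else float("inf")
```

Run against each dimension (geography, income band, business type, transaction channel) before go-live and at each validation cycle; a large disparity ratio is the kind of adverse finding the analysis should surface for documentation and Board reporting.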