In the Johannesburg suburbs, 45-year-old Grace Ndlovu watched her kiosk shelves sit half empty during lean periods. Without a formal credit history, banks would not lend to her. She had no option until last year, when she secured her first loan through MTN's mobile money platform MoMo and the fintech company JUMO.
Qwikloan, an artificial intelligence (AI)-powered loan service, offers short-term credit ranging from R250 ($13) to R10,000 ($540), evaluating applicants on their mobile money transactions, airtime purchases, and repayment patterns. Approval takes only minutes.
“I never thought I'd qualify for a loan,” says Ndlovu. “Now, when I need stock for a busy weekend, I can get the money fast.”
Across Africa, AI-led credit scoring is reshaping financial access. More than 350 million adults remain locked out of formal banking, yet digital lenders are approving loans at unprecedented scale.
The technology is creating opportunities, but it is also raising concerns. Borrowers often have little insight into how credit models use their data, which can include everything from phone usage patterns and personal messages to social media activity, and the practice is drawing scrutiny from regulators.
Fast approval
Securing a loan once meant visiting a bank, filling out forms, and waiting weeks. Without collateral or a credit history, rejection was almost certain. Then came mobile money. Since the launch of M-PESA in Kenya in 2007, mobile wallets have transformed commerce, allowing millions of people to send and receive money without a bank account. As mobile money became established, it also left data trails: records of how people topped up airtime, paid bills, and made transfers. Lenders began to take notice.
By 2012, Kenya's M-Shwari had pioneered mobile lending, using transaction history and call records to evaluate borrowers. It was a breakthrough, and companies like MTN have since pushed the model further with AI.
MTN has steadily broadened the range of financial services it offers customers, says John Mark Ssebunnya, general manager of fintech architecture at MTN Group.
“Today, we can deploy large language models (LLMs) to analyse more complex data sources and make decisions in seconds.” AI models now analyse SMS content for signals of financial behaviour. Messages discussing debts can contribute to a borrower's profile, and even informal loans leave a digital trace.
Beyond text data, lenders integrate social media insights, subject to user consent.
“If allowed, lenders can extract valuable behavioural signals,” says Ssebunnya. Open banking has expanded the model further, allowing lenders to evaluate borrowers using bank records, utility payments, and even pay-TV subscriptions.
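To make the mechanics concrete, the behavioural signals described above can be combined into a score. The sketch below is purely illustrative, not MTN's, JUMO's, or any lender's actual model; the feature names, weights, and threshold are invented for this example. It shows the common pattern of a logistic model over alternative-data features:

```python
import math

# Illustrative feature weights (invented for this sketch, not a real lender's model).
# Positive weights raise the estimated repayment probability.
WEIGHTS = {
    "monthly_momo_txns": 0.015,   # count of mobile money transactions per month
    "avg_airtime_topup": 0.002,   # average airtime top-up amount
    "on_time_repayments": 0.40,   # prior loans repaid on schedule
    "utility_bills_paid": 0.10,   # utility bills settled via mobile money
}
BIAS = -2.5  # baseline: with no positive signals, approval is unlikely

def repayment_probability(features: dict) -> float:
    """Logistic model: sigmoid of a weighted sum of behavioural features."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def decide(features: dict, threshold: float = 0.5) -> str:
    """Approve when the estimated repayment probability clears the threshold."""
    return "approve" if repayment_probability(features) >= threshold else "decline"

# A hypothetical kiosk owner with a steady mobile money trail.
applicant = {
    "monthly_momo_txns": 120,
    "avg_airtime_topup": 50,
    "on_time_repayments": 3,
    "utility_bills_paid": 6,
}
print(decide(applicant))  # a strong transaction history clears the threshold
```

In production such weights would be learned from repayment data rather than set by hand, and the real systems described in this article layer LLM-derived signals on top, but the decision step reduces to the same idea: a probability estimate compared against a cut-off.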
Last year, MTN's lending business issued between $1.4bn and $1.6bn in loans to over 5 million people, says Ssebunnya. Ten years ago, that level of financial inclusion was unthinkable.
For small and medium-sized businesses, these micro- and nano-loans are lifelines. “A retailer running a grocery stall with only $200-$500 of inventory may seem small, but access to fast credit means they can restock regularly and grow their business,” says Ssebunnya.
Can AI lend money fairly?
But as AI-driven lending expands, so do the risks. The very systems designed to expand financial inclusion can just as easily deepen inequality or exploit vulnerable borrowers.
“One of the biggest data privacy concerns is the source of data that AI-driven lending systems use for credit scoring,” said Nanjira Sambuli, a technology policy researcher in Kenya.
“Some people are concerned about the use of social media profiles, or of applicants' wider online presence, to assess creditworthiness. Another concern is unsolicited prompts for ‘instant loans'. Much of this is developed from trends observed in financial behaviour, drawn from transaction datasets on platforms such as mobile money,” Sambuli says.
The dynamics of informed user consent around transactional data and contact details are skewed, especially in African markets, Sambuli adds.
The implications are stark. Those without a digital footprint risk exclusion, while others can be bombarded with unsolicited loan offers, often driven by invisible algorithms tracking their financial habits.
Bias is another risk. An AI model is only as fair as the data it is trained on. Without safeguards, models can reinforce existing inequalities, favouring certain borrowers over others.
Tausi Africa, a Tanzanian fintech company behind AI credit scoring platform Manka, is taking steps to address concerns about lending bias.
“Our AI doesn't use metadata like gender or race,” says Baadel, director of research and development at Manka. “We focus on transaction patterns and affordability, and remove any factors that could introduce bias.”
Tausi Africa says it has incorporated a Gender Lens Investing (GLI) framework and ethical AI principles into model development. The approach aims to identify systemic barriers to financial access, refine the algorithms accordingly, and actively promote inclusion, especially for women and young people.
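One concrete safeguard consistent with Tausi Africa's description is to prevent protected attributes from ever reaching the model. The sketch below is illustrative only, not Manka's actual code; the attribute list and field names are invented for the example:

```python
# Attributes that must never influence a credit decision (illustrative list).
PROTECTED_ATTRIBUTES = {"gender", "race", "ethnicity", "religion", "tribe"}

def strip_protected(raw_features: dict) -> dict:
    """Drop protected attributes so only behavioural and affordability
    signals (transaction patterns, repayment history) reach the model."""
    return {k: v for k, v in raw_features.items()
            if k not in PROTECTED_ATTRIBUTES}

# Hypothetical applicant record mixing protected and behavioural fields.
raw = {
    "gender": "F",
    "monthly_momo_txns": 80,
    "on_time_repayments": 2,
}
clean = strip_protected(raw)
print(clean)  # protected fields are gone; behavioural signals remain
```

Dropping the fields themselves is only the first step; proxy variables (a location or device type that correlates with a protected attribute) can leak bias back in, which is why the article's sources also call for audits of model outcomes, not just inputs.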
Alternative data sources such as utility payments and financial obligations are transforming credit valuations.
“A kiosk owner who regularly pays electricity and water bills through mobile money demonstrates financial discipline and a commitment to recurring obligations,” Baadel says. “These payments also help verify proof of residence; whether an applicant owns or rents their home or business space provides additional context for risk assessment.”
Yet even as loan approval rates and financial inclusion improve, new risks emerge. Unaudited AI models, Sambuli warns, can trap borrowers in a debt cycle.
“Unexplainable, unauditable AI-driven credit scoring can easily escalate financial inequality and even put financial health at risk. If borrowers end up borrowing from Peter to pay Paul, the system is failing them.”
The promise of financial inclusion
M-Kopa, which provides digital financial services to underbanked consumers, says it uses Microsoft's AI services to assess lending risk and improve its financial forecasts. The company processes over 500 payments per minute and has enabled 3 million Africans to access solar systems, digital loans, health insurance, and smartphones. It says its AI-driven model has unlocked 440,000 additional credit lines for customers after successful repayments, and argues that predictive analytics can expand financial access while managing risk.
Fraud detection is another important AI application. “Take Nigeria, with a population of over 200 million,” says Ssebunnya. “Even if we focus only on mobile users, that is 100 million individuals generating transaction data. AI is the only way to detect anomalies at that scale. For example, if someone inserts 10 different SIM cards into their phone in a month, AI can assess whether that points to fraud or a legitimate use case.”
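The SIM-card example maps naturally onto a simple anomaly rule: flag accounts whose monthly SIM count sits far above the population norm. The sketch below is an illustration of that idea, not MTN's actual detection system; the data, function name, and z-score threshold are invented for the example:

```python
import statistics

def flag_sim_anomalies(sim_counts: dict, z_threshold: float = 3.0) -> list:
    """Flag users whose SIMs-used-per-month count is an outlier:
    more than z_threshold standard deviations above the mean."""
    counts = list(sim_counts.values())
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)  # population standard deviation
    if stdev == 0:
        return []  # no variation, nothing stands out
    return [user for user, n in sim_counts.items()
            if (n - mean) / stdev > z_threshold]

# Illustrative data: most users swap one or two SIMs a month; one uses 10.
sims_per_month = {"u1": 1, "u2": 1, "u3": 1, "u4": 1, "u5": 1,
                  "u6": 1, "u7": 1, "u8": 1, "u9": 2, "u10": 2,
                  "u11": 10}
print(flag_sim_anomalies(sims_per_month))
```

A flagged account would not be blocked automatically; as Ssebunnya notes, the point is to surface the case so the system (or a human reviewer) can judge whether it is fraud or a legitimate use, such as a phone repair shop testing SIMs.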
Regulators playing catch-up
Regulators are playing catch-up. Kenya's central bank has introduced licensing for digital lenders, while Nigeria's Federal Competition and Consumer Protection Commission has cracked down on predatory lending. However, enforcement remains inconsistent, and many AI-driven credit models operate in a regulatory grey area.
“Accountability for scoring systems should be a policy priority and a regulatory requirement. This can be achieved through regular audits, both via self-assessment by businesses and by opening datasets to researchers and civil society,” says Sambuli.
Though stricter rules on digital lending have been established in markets like Kenya and Nigeria, regulators are struggling to keep pace with the rapid evolution of AI-driven credit systems. The real test will be adapting policy to technological advances while ensuring consumer protection.
“The challenge ahead is to balance innovation and accountability,” says Ssebunnya. “AI models are only as good as the data they are trained on. AI has helped democratise access to credit, but we cannot ignore the risk of bias. The key challenge is to ensure that AI models do not reinforce existing inequality.”