AI Failures in Financial Decision-Making: What Happens Next
- Feb 17
- 3 min read

Artificial intelligence (AI) remains a dominant force in the fintech industry, with firms continually seeking innovative ways to integrate the technology into their operations for a competitive edge. As The Fintech Times explores AI trends this February, we examine the impact of AI failures on financial decision-making and whether firms risk becoming too dependent on the technology.
The Need for AI Monitoring to Address Failures Early
Maya Mikhailov, CEO at SAVVI AI, emphasizes that AI implementation alone is insufficient—continuous monitoring is necessary to ensure optimal performance.
“There are several types of AI failures in financial decision-making, including bias from flawed historical data, data drift from outdated models, and unforeseen ‘black swan’ events,” Mikhailov explains.
For instance, models trained on biased historical data can perpetuate poor past decisions. Additionally, shifts in economic conditions—such as fluctuating interest rates—can render a previously effective model obsolete if it is not retrained. Finally, unexpected global events, like the COVID-19 pandemic, can create scenarios AI has never encountered, leading to inaccurate predictions.
“To prevent and correct errors, financial institutions must implement back-testing, guardrails, and continuous retraining. Among AI applications, machine learning (ML) is the most established in finance, making firms more adept at managing ML outcomes and failures.”
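To make the data-drift scenario Mikhailov describes concrete, the sketch below uses the Population Stability Index (PSI), one common drift metric, to compare the interest rates a model saw at training time against live rates. The metric choice, thresholds, and data here are illustrative assumptions, not SAVVI AI's actual method.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline (training-time) sample and a live sample."""
    # Bin edges come from the baseline so both samples are bucketed identically.
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    # Clip live values into the baseline range so outliers land in the end bins.
    actual = np.clip(actual, edges[0], edges[-1])

    expected_pct = np.histogram(expected, edges)[0] / len(expected)
    actual_pct = np.histogram(actual, edges)[0] / len(actual)

    # Guard against empty buckets before taking logs.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

# Hypothetical example: market interest rates shift after the model was trained.
rng = np.random.default_rng(42)
training_rates = rng.normal(3.0, 0.5, 10_000)  # rates seen during training
live_rates = rng.normal(5.5, 0.7, 10_000)      # rates seen in production today

psi = population_stability_index(training_rates, live_rates)
print(f"PSI = {psi:.2f}")
# A common rule of thumb: PSI above 0.25 signals major distribution drift.
if psi > 0.25:
    print("Significant drift detected: flag the model for retraining")
```

A check like this would typically run on a schedule against production data, feeding the kind of continuous retraining loop Mikhailov recommends.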
The Cost of Over-Reliance on AI
James Francis, CEO at Paradigm Asset Management, highlights the risk of excessive dependence on AI, which can drain company resources and lead to financial losses.
“Even the most sophisticated AI can fail—like a computer crashing mid-game,” he notes. “I’ve seen businesses rely too heavily on AI, neglecting human oversight. At Paradigm, we ensure AI assists rather than dominates decision-making, blending intelligent technology with human expertise.”
Francis underscores the importance of balancing AI with human judgment: “While AI in finance is exciting, we must remember that even robots need a human partner.”
The Risk of Excluding Deserving Customers
AI has revolutionized customer experiences, particularly in lending, where it can streamline credit assessments and personalize loan offers. However, Yaacov Martin, co-founder and CEO at Jifiti, warns that mismanaged AI can inadvertently exclude deserving borrowers.
“AI's reliance on historical data and lack of subjective human oversight can reinforce biases, leading to unjust lending decisions,” Martin explains. “Without proper oversight and regulation, financial institutions risk privacy concerns and unfair lending outcomes.”
To mitigate risks, Martin advocates for periodic human review and strong regulatory frameworks to ensure fairness, transparency, and ethical AI implementation in lending.
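To illustrate what a periodic review like the one Martin advocates might check, the sketch below computes the adverse impact ratio on approval rates across two applicant groups, following the "four-fifths rule" of thumb. The groups, counts, and threshold are hypothetical, not Jifiti's process.

```python
# Hypothetical lending decisions (True = approved) for two applicant groups.
group_a = [True] * 80 + [False] * 20  # 80% approval rate
group_b = [True] * 55 + [False] * 45  # 55% approval rate

def approval_rate(decisions):
    return sum(decisions) / len(decisions)

rates = {"group A": approval_rate(group_a), "group B": approval_rate(group_b)}
impact_ratio = min(rates.values()) / max(rates.values())

print(f"Approval rates: {rates}")
print(f"Adverse impact ratio: {impact_ratio:.2f}")
# Under the four-fifths rule of thumb, a ratio below 0.8 warrants escalation.
if impact_ratio < 0.8:
    print("Ratio below 0.8: escalate the model's lending criteria for human review")
```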
Navigating AI Implementation with Expert Partnerships
Vikas Sharma, senior vice president at EXL, stresses that companies cannot become AI experts overnight. Partnering with experienced firms can help mitigate AI failures.
“The risks of AI failure in finance—ranging from regulatory breaches to reputational damage—are too significant to ignore,” Sharma says. “Without proper governance, minor AI failures can escalate into systemic instability.”
Sharma advises fintech firms to collaborate with AI specialists to design and implement scalable AI strategies. “A well-structured AI framework, including human oversight, is essential for accountability and risk mitigation.”
Building Robust AI Frameworks for Financial Institutions
Mark Dearman, director of industry banking solutions at FintechOS, warns that some financial institutions are scaling back their human risk management teams, creating dangerous gaps in AI oversight.
“Automation bias—where humans blindly trust AI decisions—can lead to costly errors,” Dearman explains. “To counteract this, institutions need robust AI governance, including stringent testing protocols and clear accountability structures.”
Regulatory bodies are increasing scrutiny of AI systems, emphasizing transparency and human oversight. “AI should enhance, not replace, human decision-making,” Dearman concludes. “Striking the right balance between AI capabilities and human intervention is the key to avoiding failures.”
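One way to operationalize the balance Dearman describes is a routing guardrail that sends low-confidence or high-impact AI decisions to a human reviewer instead of auto-executing them. The sketch below is a hypothetical illustration: the fields, thresholds, and routing rules are assumptions, not FintechOS's implementation.

```python
from dataclasses import dataclass

@dataclass
class CreditDecision:
    applicant_id: str
    approved: bool
    confidence: float  # model's score for its own prediction, 0 to 1
    amount: float      # loan amount in dollars

CONFIDENCE_FLOOR = 0.85       # below this, a human must review
HIGH_IMPACT_AMOUNT = 250_000  # at or above this, a human reviews regardless

def route(decision: CreditDecision) -> str:
    """Return 'auto' if the AI decision can stand, 'human' if it needs review."""
    if decision.confidence < CONFIDENCE_FLOOR:
        return "human"
    if decision.amount >= HIGH_IMPACT_AMOUNT:
        return "human"
    return "auto"

decisions = [
    CreditDecision("A-101", approved=True,  confidence=0.97, amount=40_000),
    CreditDecision("A-102", approved=False, confidence=0.62, amount=15_000),
    CreditDecision("A-103", approved=True,  confidence=0.91, amount=500_000),
]
for d in decisions:
    print(d.applicant_id, "->", route(d))
# A-101 -> auto; A-102 -> human (low confidence); A-103 -> human (high impact)
```

A rule like this keeps the AI handling routine volume while preserving the clear accountability structure Dearman calls for on the decisions that matter most.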