AI disruption will challenge lending decisions in coming years, Goldman exec says - Reuters

March 06, 2026 | By virtualoplossing
Another Shiny Ball? Goldman's Latest Revelation.

So, a Goldman exec thinks AI will challenge lending? Bless their heart. It’s almost quaint, isn't it? Like they just discovered fire. For two decades, I’ve seen enough "paradigm shifts" and "disruptive innovations" to wallpaper a small country. Every few years, some bright-eyed MBA, usually fresh out of a program that taught them more about synergy than actual risk, prances in with a new buzzword, convinced they’ve cracked the code. AI, GenAI, Machine Learning—call it what you want. It's just another tool. A fancy hammer, maybe, but still a hammer. The fundamentals of lending? They haven't changed since the Babylonians figured out interest rates. People borrow, people lend, some pay back, some don't. That’s the game. And believe me, AI ain’t changing human nature.

The reality is, we’ve been trying to automate lending decisions since the dawn of credit scoring. FICO scores were the first shot. Then came the big data plays. Now it's this. And every time, we run into the same brick walls, only now they’re draped in more complex, harder-to-audit algorithms. The idea that AI is some sort of magic bullet, a panacea for all the messy, subjective bits of financial risk? Total nonsense. But we buy it anyway. Because hope, and the promise of fatter bonuses from “efficiency gains,” springs eternal.

The Ghost in the Machine: Who's Responsible When the Wheels Come Off?

Here's the rub. When a human underwriter makes a bad call, you can fire them. You can retrain them. You can point fingers. When an AI makes a bad call—and it will make bad calls, trust me—who takes the fall? The algorithm? The data scientist who built it, half-asleep on three Red Bulls? The CEO who greenlit the project without understanding an ounce of its operational risk? This isn't some harmless little bug in a back-office system; this is people's livelihoods. This is the difference between a family getting a mortgage or being stuck renting forever.

The models we’re building are opaque. They’re black boxes. Try explaining why a large language model decided to approve a high-risk loan applicant to a regulator. Go on, I'll wait. "Because the embedded weights in the multi-layer perceptron, after processing terabytes of historical transaction data and social media sentiment, indicated a 0.03% higher probability of repayment than a similar profile rejected last week." Yeah, right. Regulators want accountability. They want interpretability. They want to know exactly why. And most of these advanced AI systems? They struggle with "why." They spit out correlations, not necessarily causations. It's like asking a pigeon why it picked the winning lottery numbers; it just did. This inherent lack of transparency is a ticking time bomb.

  • The regulatory nightmare is just starting.
  • Explainable AI (XAI) is a nice academic concept, but often a corporate fig leaf.
  • True accountability remains elusive when the "decision maker" is code.
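To see what regulators actually mean by "why," it helps to contrast the black box with the thing it's replacing. A minimal sketch, assuming a hypothetical linear scorecard with invented feature names and weights: the most negative contributions to the score fall straight out as auditable adverse-action reason codes, which is exactly what a deep network can't give you.

```python
# Hypothetical linear scorecard -- feature names and weights are invented
# for illustration. Score = sum(weight_i * value_i); the most negative
# contributions become the "reason codes" a regulator can actually audit.

WEIGHTS = {
    "utilization": -2.0,     # high revolving utilization hurts the score
    "late_payments": -1.5,   # recent delinquencies hurt
    "income_ratio": 1.2,     # income vs. obligations helps
    "years_on_file": 0.4,    # longer credit history helps
}

def score_with_reasons(applicant, top_n=2):
    """Return (score, reasons): the top_n most negative contributions."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    # Sort features by contribution, most negative first.
    reasons = sorted(contributions, key=contributions.get)[:top_n]
    return score, reasons

applicant = {"utilization": 0.9, "late_payments": 2,
             "income_ratio": 0.5, "years_on_file": 3}
score, reasons = score_with_reasons(applicant)
# reasons -> ["late_payments", "utilization"]: each one maps directly to an
# input a regulator can inspect. No hand-waving about embedded weights.
```

The point isn't that banks should use four-feature scorecards; it's that every step from input to decision is traceable, and that traceability is precisely what gets lost the deeper the network goes.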

The Data Swamp: Garbage In, Gospel Out

Everyone talks about data being the new oil. More like data is the new raw sewage, and we're all swimming in it. AI models, especially the ones claiming to predict complex human behavior like defaulting on a loan, are only as good as the data they're fed. And the data? It's a mess. It's biased. It's incomplete. It's riddled with historical prejudices. You think a machine learning model is suddenly going to be free of human bias when it's trained on decades of decisions made by humans who were absolutely, demonstrably biased?

We're talking about legacy systems that barely talk to each other, data silos so deep you need an ROV to find them, and data entry errors that would make a tax auditor weep. Banks are still dealing with latency issues pulling data across ancient infrastructure, let alone building real-time, pristine feeds for AI. They spend massive CAPEX on data lakes that turn into data swamps. Then they throw an AI at it, expecting miracles. What you get are LLM hallucinations on a financial scale. The model confidently predicts, based on flawed or incomplete data, something utterly detached from reality. And because it's a computer, we just assume it's right. Because the numbers. The numbers don't lie, right? Except when they do, because the source is compromised.

  • Data quality is the Achilles' heel of AI in finance.
  • Historical bias gets baked directly into the algorithms.
  • The cost of cleaning and maintaining data often outweighs the promised AI benefits.
  • Garbage in, gospel out. Always.
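How does anyone catch that baked-in bias? One standard screen is the "four-fifths rule": the approval rate for any group should be at least 80% of the most-approved group's rate. A minimal sketch with invented numbers, not real portfolio data:

```python
# Four-fifths rule check on lending outcomes. All figures are invented
# for illustration; real monitoring runs on actual decision logs.

def disparate_impact(approvals):
    """approvals: {group: (approved, total)} -> (ratios, flagged groups)."""
    rates = {g: a / t for g, (a, t) in approvals.items()}
    best = max(rates.values())
    ratios = {g: r / best for g, r in rates.items()}
    # Any group below 80% of the best rate gets flagged for review.
    flagged = [g for g, r in ratios.items() if r < 0.8]
    return ratios, flagged

# A model trained on decades of biased decisions reproduces them:
outcomes = {
    "group_a": (480, 600),   # 80% approval rate
    "group_b": (270, 600),   # 45% approval rate
}
ratios, flagged = disparate_impact(outcomes)
# group_b's ratio is 0.45 / 0.80 = 0.5625 -- well under the 0.8 threshold.
```

The arithmetic is trivial; the hard part is everything upstream of it, which is exactly the data-swamp problem above.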

Regulation's Relentless Ride

And then there are the regulators. Good luck convincing the SEC or the CFPB that your shiny new AI is "fair" and "unbiased" when you can't even trace its decision-making logic. They’re not interested in your fancy neural networks; they want auditable trails. They want consumer protection. They want equal opportunity. They want to prevent another redlining scandal, only this time powered by algorithms. The legal frameworks simply haven't caught up. We’re building super-fast digital cars on analog roads. The inevitable crashes are going to be spectacular.

Every time we push the envelope, regulators drag their feet, then overcorrect with a sledgehammer. Remember when everyone thought algorithmic trading was the future of all markets? Then came the flash crashes, the unintended consequences, the systemic risks nobody accounted for. Now, imagine that, but for every loan, every credit card, every small business line of credit. The regulatory burden on demonstrating fairness, transparency, and non-discrimination for AI models is going to be immense. And expensive. The juice isn't worth the squeeze for a lot of smaller players, and even the big ones will find their margins eaten away by compliance costs.

The Unquantifiable Human Element

Look, lending is ultimately about trust. And empathy. A computer can process a million data points, but it can’t sit across from someone, hear their story, understand their desperation, or their genuine desire to make good on a promise. It can’t see the determination in their eyes. It can’t make an exception for a first-time small business owner with a solid plan but no track record. It can’t assess character. These are soft factors, sure, unquantifiable in a spreadsheet, but they are absolutely critical. Any underwriter worth their salt knows this.

We’re talking about taking human judgment, honed over decades of painful experience and countless unique situations, and trying to distill it into a series of if-then statements or a probabilistic algorithm. It's a fool's errand. We tried it with expert systems in the 90s, remember those? Didn't work then, won't work now. Not completely. AI will absolutely make things faster, more efficient, especially for the vanilla, low-risk stuff. But for anything complex, anything nuanced, anything that requires a leap of faith or a genuine understanding of someone’s situation? You still need a human. The idea that AI is going to replace human judgment entirely? That’s for the folks still drinking the Kool-Aid. It's a powerful tool, no doubt, but it's not a substitute for wisdom.

  • Emotional intelligence and empathy are irreplaceable in complex lending.
  • AI excels at pattern recognition, not nuanced human understanding.
  • The "exceptions" that build relationships and communities are often based on human judgment.
  • Edge computing might speed up decision delivery, but it won't imbue the AI with common sense.
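That first-time small business owner is worth making concrete. A toy sketch of an if-then rules engine (all rules and applicants invented): the applicant's strongest signal never even enters the decision, because the rules have no slot for it.

```python
# Toy rules engine illustrating the "solid plan, no track record" failure
# mode discussed above. Rules, thresholds, and applicant are all invented.

def rules_engine(applicant):
    if applicant["years_trading"] < 2:
        return "decline"   # no track record -> automatic decline
    if applicant["credit_score"] < 650:
        return "decline"
    return "approve"

newcomer = {
    "years_trading": 0,
    "credit_score": 720,
    "business_plan": "strong",   # never consulted by any rule
}
decision = rules_engine(newcomer)  # "decline"
# The judgment call a human underwriter would make -- reading the plan,
# weighing the person -- has no representation in the if-then ladder.
```

Swap the hand-written rules for a learned model and the problem shifts rather than disappears: the model can only weigh features someone thought to encode.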

The Blunt Truth: Your AI Lending FAQ

Will AI truly replace human underwriters?

The Blunt Truth: Not entirely. It'll eat the grunt work—the low-risk, high-volume approvals. The complex, messy cases? The ones that actually require judgment and empathy? Those will still need a human. They’ll just have fancier tools to inform their decision. Expect more automation, fewer human underwriters overall, but the ones left will be handling the really tough stuff.

  • Quick Fact: AI is excellent at pattern matching; terrible at intuition.
  • Red Flag: Any company promising 100% automated underwriting for all loan types is selling snake oil.

Is this just another tech bubble that's going to burst?

The Blunt Truth: Parts of it, absolutely. The hype around "transformative AI" is off the charts, fueled by venture capital and unrealistic expectations. Some startups will crash and burn spectacularly. But the core technology—machine learning for data processing and predictive analytics—is here to stay. It's the application and the unrealistic promises that are bubbly.

  • Quick Fact: Every major tech wave has an irrational exuberance phase.
  • Red Flag: Beware of "AI solutions" that lack clear, measurable ROI or depend solely on proprietary black-box algorithms.

How can we trust an AI not to discriminate?

The Blunt Truth: We can't, not inherently. AI models learn from historical data, which is rife with historical biases. Unless you meticulously audit the data, carefully engineer the features, and constantly monitor for disparate impact, your AI will simply replicate and even amplify existing prejudices. It's a monumental, ongoing task, not a set-it-and-forget-it solution.

  • Quick Fact: Bias in AI is a feature, not a bug, if the training data is biased.
  • Red Flag: "Our AI is unbiased because it's a computer" is the most dangerous claim you'll hear.

What’s the biggest hidden cost of adopting AI in lending?

The Blunt Truth: Data hygiene and ongoing model maintenance. Everyone focuses on the flashy algorithm. Nobody talks about the brutal, continuous effort of cleaning, structuring, and updating the underlying data. Then there's the constant monitoring for model drift, regulatory changes, and the sheer computational power needed. It’s not a one-time setup; it’s a never-ending, incredibly expensive operational burden. And don't forget the legal and compliance overhead, which is only going to grow.

  • Quick Fact: Surveys regularly find data scientists spending the bulk of their time, often cited as around 80%, on data preparation.
  • Red Flag: Low-cost "AI solutions" often punt on the true cost of data and long-term maintenance.
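"Model drift" monitoring isn't hand-waving either; it has standard machinery. One common measure is the Population Stability Index (PSI), which compares the score distribution at training time with production. A hedged sketch: bucket boundaries and the 0.25 alert threshold below are conventional rules of thumb, not universal standards.

```python
import math

# Population Stability Index: compares two bucketed distributions.
# Common rule of thumb: PSI > 0.25 means the population has shifted
# enough that the model needs attention.

def psi(expected, actual):
    """Both inputs are bucket proportions that each sum to 1.0."""
    eps = 1e-6  # guard against log(0) on empty buckets
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )

train_dist = [0.10, 0.20, 0.40, 0.20, 0.10]   # score buckets at model build
prod_dist  = [0.05, 0.10, 0.30, 0.30, 0.25]   # same buckets, a year later
drift = psi(train_dist, prod_dist)
# drift lands well above 0.25 here: the production population no longer
# looks like the training population, and the model's calibration is suspect.
```

This check runs forever: every month, every segment, every model. That recurring operational grind, not the initial build, is the hidden cost the question is asking about.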

Parting Shot

So, the Goldman exec is right. AI will challenge lending decisions. It’ll challenge the hell out of them. But not in the utopian, efficiency-driven way they're probably imagining. It'll challenge our understanding of accountability, our ability to manage data that's more sludge than gold, and our willingness to grapple with the uncomfortable truth that machines, for all their power, can be just as biased, just as flawed, and just as destructive as the humans who build them. The next five years? We're going to see some incredible tech, sure. But we're also going to see some equally incredible screw-ups, all justified by the magic word: AI. Buckle up, it's going to be a bumpy ride, and the only certainty is that the bill will always come due.