5 minute read
Mar 5, 2026

The Donut Hole In Modern Lending – What Our CEO And CPO Think Is Missing

Most lending stacks have improved data access and decisioning tools, but outcomes still lag. Carrington Labs’ Chief Executive Officer (CEO) and Chief Product and Commercial Officer explain what’s missing—and how to close the “donut hole” with decision-ready analytics.

More data doesn’t equal better decisions.

Most lenders now have better access to transaction data and more configurable decisioning workflows than they did even a few years ago. Yet many still experience the same friction points: good customers getting declined, thin “yes/no” decisions that don’t translate into right-sized exposure, and early signs of stress showing up later than they should.

In a recent conversation, Carrington Labs CEO Jamie Twiss and Chief Product and Commercial Officer Kasey Kaplan described why: for many lending stacks, the weakest link isn’t data access or decision execution. It’s the layer in the middle—credit risk analytics that turns transaction behavior into decision-ready signals.

They call that missing middle the donut hole.

“The market is crowded where data gets collected and where decisions get executed. The gap is the analytics work required to turn large, messy datasets into decision-ready signals.”

A Common Misunderstanding About What “Cash Flow Underwriting” Is

When teams hear “cash flow underwriting,” they often map it to one of three things:

  • A decision engine that replaces existing policy logic
  • A bureau alternative that replaces credit files
  • A categorization layer that buckets spend and income into neat labels

Carrington Labs pushes back on that framing—not because those layers don’t matter, but because they don’t automatically improve outcomes.

If you want a simple baseline definition you can share internally, see: A 5-Minute Guide to Cash Flow Underwriting

What do people most often mistake Carrington Labs for?

Kasey: People assume we’re a decision engine, a bureau replacement, or a categorization layer. The cleanest correction is that we’re none of those.

Most stacks have tools at the edges—data access and enrichment on one side, decisioning and orchestration on the other. What sits in between is the hard part: translating transaction behavior into signals that a lender can actually use.

Jamie: The missing middle is what allows a lender to answer the full underwriting question: Should we lend, how much, on what terms, and what should we monitor? That middle is where outcomes are often won or lost.

____________________________________________________________________________________________________________________________________________________________________________________________________________

This distinction matters because it keeps decisioning authority where it belongs: with the lender. Carrington Labs positions itself as a modular analytics layer that supports lender judgment—rather than an automated decision-maker.

Learn more about the origination modules lenders plug into existing workflows: Cashflow Score and Credit Risk Model

____________________________________________________________________________________________________________________________________________________________________________________________________________

The Donut Hole Is Where False Declines Quietly Happen

Most lenders can point to the obvious tradeoffs: tighten policy and reduce loss, loosen policy and gain growth. But the more painful outcomes tend to be less visible:

  • declines that don’t need to be declines
  • approvals that should have been smaller or structured differently
  • risk signals that arrive after delinquency has already started

That’s not always a “data problem.” It’s often an analytics translation problem: transaction-level reality isn’t being converted into stable, explainable indicators that can be used inside policy and operations.

“Equal access to data doesn’t guarantee better decisions. The differentiator is the analytics layer that converts behavior into signals you can defend.”

If “late stress detection” is a priority use case, this is the most direct product tie-in: Cashflow Servicing

____________________________________________________________________________________________________________________________________________________________________________________________________________

Integration Without Replacing Your Stack

One reason lenders stall in the donut hole is fear of deployment complexity. If “better analytics” implies ripping out incumbent decisioning, retraining teams, and rewriting policy infrastructure, it becomes a multi-quarter risk.

Jamie and Kasey emphasize a different approach: integrate adjacent to existing workflows and earn your way into the decision.

If you’re not a decision engine, how do you fit into a lender’s workflow?

Kasey: Typically through an API-based implementation where outputs can be used inside existing underwriting workflows, decision engines, and review queues. The key isn’t just integration—it’s sequencing.

Jamie: A common pattern is to start in shadow mode—run the analytics alongside the incumbent approach without influencing decisions. Compare outputs, validate behavior, build confidence under governance. Then, if the lender chooses, use the signals more directly and more broadly over time.
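The shadow-mode pattern Jamie describes can be sketched in a few lines. Everything here is illustrative: the score functions, thresholds, and field names are hypothetical stand-ins, not Carrington Labs’ actual API, but the structure shows the key property—the challenger is logged, never consulted.

```python
# Sketch of a shadow-mode rollout: the incumbent model decides, the
# challenger's output is only logged for later comparison. All names and
# numbers are hypothetical assumptions for illustration.

def incumbent_score(applicant):
    # Placeholder for the lender's existing scorecard.
    return applicant["bureau_score"] / 850

def challenger_score(applicant):
    # Placeholder for a cash-flow-based analytics signal.
    return applicant["inflow_stability"]

def decide(applicant, shadow_log, approve_cutoff=0.6):
    live = incumbent_score(applicant)
    shadow = challenger_score(applicant)
    decision = "approve" if live >= approve_cutoff else "decline"
    # The challenger never influences the decision; it is only recorded so
    # the credit team can later quantify how decisions *would* have changed.
    shadow_log.append({
        "applicant_id": applicant["id"],
        "live_score": live,
        "shadow_score": shadow,
        "decision": decision,
        "would_flip": (shadow >= approve_cutoff) != (decision == "approve"),
    })
    return decision

log = []
applicants = [
    {"id": 1, "bureau_score": 700, "inflow_stability": 0.4},
    {"id": 2, "bureau_score": 450, "inflow_stability": 0.8},
]
decisions = [decide(a, log) for a in applicants]
flips = sum(r["would_flip"] for r in log)
print(decisions, flips)  # → ['approve', 'decline'] 2
```

The `would_flip` count is exactly the “value is observable” metric the next section mentions: how many decisions would change, measured before any decision actually does.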

____________________________________________________________________________________________________________________________________________________________________________________________________________

That sequencing does two things credit leaders care about:

  • Governance comes first. You can test stability, bias controls, and operational fit without changing approval rates on day one.
  • Value is observable. You can quantify how decisions would have changed before you actually change them.

For a governance-first checklist you can use to structure that rollout: 8 Questions to Ask Before Deploying a Cash Flow Score.

____________________________________________________________________________________________________________________________________________________________________________________________________________

The Economics Are Often In Exposure, Not Just Approval Rate

Many conversations about underwriting focus on the binary decision: approve or decline. But lenders don’t run profit and loss (P&L) on approvals—they run them on loss, margin, and exposure.

Jamie and Kasey repeatedly come back to a point that sounds simple but changes how you evaluate underwriting improvements:

Risk is not a single number. Risk changes with exposure.

Where does cash flow underwriting create economic lift?

Kasey: Lenders talk about growth as “approve more,” but the economics show up in performance and exposure. Cash flow underwriting adds visibility into capacity and behavior that traditional scores can miss.

Jamie: Many lenders put enormous effort into “yes/no,” then treat limit and terms as an afterthought with blunt rules of thumb. But someone can be low-risk up to a certain limit and materially riskier beyond it.

____________________________________________________________________________________________________________________________________________________________________________________________________________

This is where “better analytics” shows up in real underwriting decisions:

  • Not just whether to approve, but how much to approve
  • Not just what the risk is, but what risk looks like at a given exposure
  • Not just how to grow, but how to grow while protecting contribution
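The idea that “risk changes with exposure” can be made concrete with a toy model. The PD curve, candidate limits, and tolerance below are contrived assumptions, not a production model—the point is only the shape of the decision: the same applicant clears policy at a small limit and fails it at a large one.

```python
# Hypothetical sketch of exposure-aware limit setting. The PD curve and all
# numbers are illustrative assumptions, not a real underwriting model.

def pd_at_limit(limit, monthly_free_cash_flow):
    # Toy assumption: default probability grows as the payment burden
    # (limit amortized over 12 months) approaches free cash flow.
    burden = (limit / 12) / monthly_free_cash_flow
    return min(1.0, 0.02 + 0.25 * burden ** 2)

def max_acceptable_limit(monthly_free_cash_flow, pd_tolerance=0.05,
                         candidates=(1000, 2000, 5000, 10000, 20000)):
    # Keep the largest candidate limit whose modeled PD stays inside
    # the credit policy tolerance.
    best = 0
    for limit in candidates:
        if pd_at_limit(limit, monthly_free_cash_flow) <= pd_tolerance:
            best = limit
    return best

limit = max_acceptable_limit(monthly_free_cash_flow=1500)
print(limit)  # → 5000: low-risk up to this limit, materially riskier beyond it
```

The applicant isn’t “risky” or “safe” in the abstract—they are acceptable at $5,000 and outside tolerance at $10,000, which is the exposure question a yes/no score never answers.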

If you want a product page that maps directly to “exposure is where the economics are,” see: Credit Offer Engine

____________________________________________________________________________________________________________________________________________________________________________________________________________

Why Categorization Alone Isn’t Cash Flow Underwriting

Transaction data is messy. That’s obvious. What’s less obvious is how often teams stop at the first “clean” representation—merchant categorization, spend buckets, and ratios across time windows.

That can help. But it’s also where many cash flow programs quietly plateau—because labels alone can flatten behavior and hide decision-relevant context.

This idea is expanded in: Data Isn’t Your Edge. Decision Quality Is. (carringtonlabs.com)

What do lenders get wrong about cash flow underwriting?

Jamie: Teams default to categorization and counting—spend in category X over Y days, repeated across lots of windows. It sounds sophisticated because you end up with huge numbers of variables, but it can miss what matters.

The stronger insight often isn’t the category label. It’s patterns and context—how stable inflows are, how obligations behave, how volatility changes, whether behavior shifts under stress, and what that implies about repayment capacity.

Kasey: Lenders can mistake volume of variables for quality of signal. The question is whether the analytics captures behavior that maps to repayment outcomes in a way your credit team can explain and govern.
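The contrast between category counts and pattern features can be shown with toy data. The inflow series and the feature name below are illustrative assumptions; the point is that two applicants with similar six-month income totals can look identical to a categorization layer yet very different on a stability measure.

```python
# Illustrative contrast (toy data, hypothetical feature names): category
# totals vs. a behavioral pattern feature like inflow stability.
from statistics import mean, stdev

monthly_inflows = [4200, 4100, 4300, 4150, 4250, 4200]   # steady earner
gig_inflows     = [7000, 1500, 6500, 1200, 7500, 1500]   # same total, volatile

def inflow_stability(inflows):
    # Coefficient of variation: lower = steadier income = more stable
    # repayment capacity.
    return stdev(inflows) / mean(inflows)

# A pure categorization view ("income over 6 months") treats these alike...
print(sum(monthly_inflows), sum(gig_inflows))  # → 25200 25200

# ...while a pattern feature separates them clearly.
s_steady = round(inflow_stability(monthly_inflows), 3)
s_gig = round(inflow_stability(gig_inflows), 3)
print(s_steady, s_gig)  # → 0.017 0.735
```

A variable count would score both applicants on the same “income in category X over Y days” features; the stability measure captures the behavioral difference that maps to repayment capacity.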

____________________________________________________________________________________________________________________________________________________________________________________________________________

What Lenders Actually Use: Decision-Ready Outputs, Not “Model Magic”

The practical question credit teams ask isn’t “is the model good?” It’s “can we use this in production without breaking our controls?”

That comes down to what is returned, how it’s interpreted, and how it supports underwriting operations.

What does a lender actually get back, and how is it used?

Kasey: Lenders call an API and receive structured outputs—like a score or ranking signal, plus explainability (key drivers and directional contributors). The point is to provide decision-ready analytics that a lender can apply within their own policy, thresholds, and exception handling.

Jamie: Explainability has to be usable. It’s not enough to say “the model decided.” Credit teams need to see the why—comfortably enough to support governance and adoption. And real applicants aren’t “all good” or “all bad,” so underwriters need both positive and negative signals to make balanced judgments.
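A decision-ready response of the kind Kasey and Jamie describe might be consumed roughly as follows. The field names and payload shape here are illustrative assumptions, not Carrington Labs’ documented API—the structural point is that the score and drivers come back, while the cutoff, exceptions, and routing live on the lender’s side.

```python
# Hypothetical shape of a decision-ready analytics response: a score plus
# explainability drivers with direction. Field names are assumptions for
# illustration; consult the vendor's actual API documentation.
import json

response_body = json.dumps({
    "score": 712,
    "drivers": [
        {"feature": "inflow_stability", "direction": "positive", "weight": 0.31},
        {"feature": "overdraft_frequency", "direction": "negative", "weight": 0.22},
    ],
})

payload = json.loads(response_body)

# The lender, not the vendor, owns the policy: thresholds, exception
# handling, and review routing are applied on the lender's side.
POLICY_CUTOFF = 680
decision = "pass_to_policy" if payload["score"] >= POLICY_CUTOFF else "manual_review"

# Surface both positive and negative drivers so underwriters see the "why"
# and can make balanced judgments on mixed applicants.
positives = [d["feature"] for d in payload["drivers"] if d["direction"] == "positive"]
negatives = [d["feature"] for d in payload["drivers"] if d["direction"] == "negative"]
print(decision, positives, negatives)
```

Keeping the cutoff in the lender’s code, not the vendor’s response, is the “decisioning authority stays with the lender” point from earlier in the piece.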

____________________________________________________________________________________________________________________________________________________________________________________________________________

Metrics That Matter In Lending Aren’t Always The “Best” Model Metrics

Most credit organizations have a familiar scorecard of model evaluation metrics. Those metrics aren’t wrong—but they can be incomplete if they don’t map to how lending decisions are actually made.

Which model metrics do you trust—and what can mislead?

Jamie: Area Under the Curve (AUC) can be a decent measure of rank-ordering risk. The issue is that it rewards discrimination everywhere—even where it doesn’t change a decision.

In lending, the commercially important zone is often around approval and pricing thresholds. A model that looks slightly worse on a generic metric can still be more valuable if it’s more accurate where decisions are made—and if it aligns to the economics of exposure and loss.

Kasey: Don’t let a single metric stand in for decision quality. Tie evaluation to the decisions you’ll actually make and defend.
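Jamie’s point can be demonstrated with a small contrived example: below, model A has the higher global AUC, but model B makes fewer decision errors at the approval cutoff, because A’s ranking mistakes sit right where decisions flip while B’s sit safely on one side of the threshold. The scores and labels are synthetic assumptions for illustration only.

```python
# Synthetic illustration: higher global AUC does not imply fewer decision
# errors at the cutoff where approvals actually change.

def auc(scores, labels):
    # Probability that a randomly chosen good (label 0) outscores a
    # randomly chosen bad (label 1); ties count half.
    goods = [s for s, y in zip(scores, labels) if y == 0]
    bads = [s for s, y in zip(scores, labels) if y == 1]
    wins = sum((g > b) + 0.5 * (g == b) for g in goods for b in bads)
    return wins / (len(goods) * len(bads))

def cutoff_errors(scores, labels, cutoff):
    # Decision errors: approving a bad (score >= cutoff, label 1) or
    # declining a good (score < cutoff, label 0).
    return sum((s >= cutoff) == (y == 1) for s, y in zip(scores, labels))

labels   = [0, 0, 0, 0, 1, 1, 1, 1]  # 0 = repaid (good), 1 = defaulted (bad)
scores_a = [0.9, 0.8, 0.45, 0.4, 0.55, 0.3, 0.2, 0.1]   # messy near the cutoff
scores_b = [0.6, 0.58, 0.55, 0.52, 0.48, 0.46, 0.3, 0.9]  # messy far from it

auc_a, auc_b = auc(scores_a, labels), auc(scores_b, labels)
err_a = cutoff_errors(scores_a, labels, 0.5)
err_b = cutoff_errors(scores_b, labels, 0.5)
print(auc_a, auc_b)  # → 0.875 0.75 : model A rank-orders better globally
print(err_a, err_b)  # → 3 1       : but model B errs less where it matters
```

This is the “commercially important zone” argument in miniature: evaluate models around the thresholds your policy actually uses, not only on portfolio-wide rank-ordering.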

____________________________________________________________________________________________________________________________________________________________________________________________________________

Key Takeaways
  • Many lending stacks are strong at data access and decision execution but weak in the middle: the credit risk analytics layer.
  • Cash flow underwriting creates economic lift when it improves exposure decisions, not just approvals.
  • Categorization helps, but durable underwriting signals often come from patterns, stability, volatility, and change, not labels alone.
  • “Good model metrics” aren’t enough—evaluate performance around cutoffs and against value-weighted outcomes.
  • Deployment doesn’t need to be disruptive: start in shadow mode, validate under governance, then expand intentionally.