
Definitions, measurement notes, pitfalls, and quick examples for modern underwriting, servicing, and model governance.
Important: Exact definitions (default event, delinquency buckets, horizons, thresholds, recovery windows, and denominators) vary by lender, product, and jurisdiction. Many “differences” in reported metrics are definition mismatches.
Definition
Transaction data that is processed and used without relying on personally identifiable information (PII) for analysis and scoring.
In Practice
Enables cash flow analytics that focus on behavior patterns while minimizing unnecessary exposure to customer identifiers.
How It’s Measured
Governed as a data-handling standard: what fields are excluded/transformed, what identifiers are retained only for matching/delivery, and what access controls apply.
Common Pitfall
Assuming “de-identified” means “no risk.” You still need controls around access, retention, audit logging, and permissible use.
Example
A lender evaluates income stability and liquidity stress from transaction feeds without using names, addresses, or free-text descriptors as modeling inputs.
Definition
The mechanism for collecting transaction data with explicit customer permission, plus the operational ability to maintain access over time.
In Practice
Mature cash flow underwriting assumes connectivity will degrade sometimes (expired permissions, account changes, refresh failures) and designs for safe fallbacks.
How It’s Measured
Consent rate, link success rate, refresh success rate, revocation rate, and “days since last refresh.”
Common Pitfall
Treating connectivity as a one-time event. “Permissioned data” is a lifecycle dependency that must be monitored.
Example
Transaction-derived capacity measures are only used when consent is active and data freshness meets policy.
Definition
How complete the transaction view is (accounts included, history length, refresh consistency, and gaps).
In Practice
Coverage determines how confidently you can interpret cash flow signals and whether a workflow should step up verification.
How It’s Measured
Percent of accounts linked, months of history available, missing-day rates, and share of applicants meeting minimum coverage thresholds.
Common Pitfall
Using a single “coverage” number and ignoring why coverage is missing (new account vs failed refresh vs partial institution view).
Example
Two applicants both have 90 days of data, but one has consistent refreshes across primary accounts while the other has intermittent gaps.
Definition
How recent the most recent transaction data is relative to the time a decision or monitoring check is performed.
In Practice
Freshness is a control variable for both underwriting and servicing signals, especially for rapidly changing income or liquidity conditions.
How It’s Measured
Time since last successful refresh; time since last posted transaction; completeness of the most recent window.
Common Pitfall
Assuming “recent” is good enough without defining a freshness threshold tied to the decision.
Example
A monitoring alert is suppressed if the most recent refresh is older than the lender’s monitoring policy.
Definition
A principle that decisions and validations should use only the information that would have been available at the time of the decision.
In Practice
Critical for auditability, clean backtesting, and governance—especially when data refreshes after the fact.
How It’s Measured
Timestamped snapshots of inputs/outputs, or reproducible “as-of” reconstruction rules.
Common Pitfall
Backtesting with “today’s cleaned dataset,” which can unintentionally include later transactions and inflate results.
Example
A January 15 application is evaluated using transactions available as of January 15, not transactions that posted on January 20.
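A minimal sketch of an "as-of" reconstruction rule, assuming transactions are held as (post date, amount) pairs; the helper name and sample data are hypothetical.

```python
from datetime import date

def as_of_view(transactions: list[tuple[date, float]],
               as_of: date) -> list[tuple[date, float]]:
    """Only transactions posted on or before the decision date are visible."""
    return [(d, amt) for d, amt in transactions if d <= as_of]

txns = [(date(2025, 1, 10), -40.0), (date(2025, 1, 14), 1_000.0),
        (date(2025, 1, 20), -250.0)]        # posts after the decision date
print(as_of_view(txns, date(2025, 1, 15)))  # the Jan 20 row is excluded
```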
Definition
The ability to trace how a signal was derived (lineage) and which definition/version was used at the time (versioning).
In Practice
Supports model governance, change control, troubleshooting, and consistent monitoring over time.
How It’s Measured
Version identifiers for features and models, documented transformations, test results, and reproducibility checks.
Common Pitfall
Silent changes (new filters, new definitions) that shift distributions and break comparability.
Example
A lender can show which “income stability” definition was used for a decision and when that definition last changed.
Definition
Probability an account defaults within a defined time horizon.
In Practice
Estimated using score/model outputs and validated against observed outcomes over the same horizon.
How It’s Measured
Observed PD ≈ defaults within T / total accounts (for the defined default event and horizon).
Common Pitfall
PDs aren’t comparable unless horizon and default definition match.
Example
60 of 1,000 hit 90+ DPD in 12 months → observed 12-month PD ≈ 6%.
Definition
Percent of exposure lost after recoveries and collection costs, conditional on default.
In Practice
Estimated from recovery curves and cost-to-collect by segment.
How It’s Measured
LGD = (EAD − Recoveries + Direct Costs) / EAD (cost conventions vary by lender).
Common Pitfall
Ignoring recovery timing distorts economics and comparisons.
Example
EAD $1,000, recoveries $500, costs $50 → LGD = 55%.
Definition
Expected outstanding exposure when default occurs, including accrued amounts per policy.
In Practice
For revolving lines, includes expected drawdown before default.
How It’s Measured
EAD = Balance + CCF × Undrawn Amount, where CCF is the credit conversion factor.
Common Pitfall
Using current balance underestimates exposure on revolving products.
Example
Balance $800, undrawn $400, CCF 50% → EAD = $1,000.
Definition
Expected credit loss over a defined horizon, combining risk and exposure.
In Practice
Used to size limits, inform pricing, and forecast portfolio losses.
How It’s Measured
EL ($) = PD × LGD × EAD.
Common Pitfall
Mixing horizons across PD, LGD, and EAD produces misleading EL estimates.
Example
PD 6%, LGD 55%, EAD $1,000 → EL = $33.
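A minimal sketch tying the four building blocks together using the illustrative numbers above; the function names and cost conventions are assumptions, since default events, cost treatment, and CCFs vary by lender.

```python
# Expected-loss building blocks (illustrative; definitions vary by lender).

def observed_pd(defaults_in_horizon: int, accounts_at_risk: int) -> float:
    """Observed PD for a defined default event and horizon."""
    return defaults_in_horizon / accounts_at_risk

def ead_revolving(balance: float, undrawn: float, ccf: float) -> float:
    """Exposure at default with a credit conversion factor on undrawn amounts."""
    return balance + ccf * undrawn

def lgd(ead: float, recoveries: float, direct_costs: float) -> float:
    """Loss given default; cost conventions vary by lender."""
    return (ead - recoveries + direct_costs) / ead

pd_12m   = observed_pd(60, 1_000)         # 0.06: 60 of 1,000 hit 90+ DPD in 12 months
exposure = ead_revolving(800, 400, 0.50)  # 1,000: balance 800, undrawn 400, CCF 50%
severity = lgd(exposure, 500, 50)         # 0.55: recoveries 500, direct costs 50

expected_loss = pd_12m * severity * exposure
print(f"EL = ${expected_loss:.0f}")       # EL = $33
```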
Definition
Percent of defaulted exposure recovered within a defined recovery window.
In Practice
Tracked by cohort to assess collections effectiveness and inform LGD.
How It’s Measured
Recovery Rate = Recoveries / EAD (within the stated window and cost treatment).
Common Pitfall
Comparisons fail without aligned recovery windows and cost treatment.
Example
Recover $450 on EAD $1,000 over 12 months → 45%.
Definition
Pricing that reflects expected loss, costs, and required return.
In Practice
Supports offer strategy by aligning price and limit to risk and unit economics.
How It’s Measured (Illustrative)
Rate ≈ Funding + Ops + EL Rate + Capital + Margin.
Common Pitfall
Pricing to PD alone ignores severity, exposure, and servicing costs.
Example
Funding 6% + ops 2% + EL 3% + capital and margin 4% → 15% target rate (illustrative).
Definition
Per-loan profitability after funding, operating costs, and credit losses (often contribution margin).
In Practice
Measured by cohort to confirm sustainable growth by segment.
How It’s Measured
Contribution = Revenue − Funding − Ops − Loss (define inclusion of acquisition/servicing).
Common Pitfall
Ignoring acquisition or servicing overstates profitability.
Example
$120 − $30 − $20 − $25 → $45 contribution.
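A short sketch of the illustrative price stack and per-loan contribution above; the component values come from the examples and are not recommendations, and grouping capital with margin is an assumption.

```python
# Illustrative risk-based price stack (component definitions vary by lender).
funding, ops, el_rate, capital_and_margin = 0.06, 0.02, 0.03, 0.04
target_rate = funding + ops + el_rate + capital_and_margin
print(f"target rate ~ {target_rate:.0%}")  # ~15%

# Per-loan contribution (state which acquisition/servicing costs are included).
revenue, funding_cost, ops_cost, credit_loss = 120, 30, 20, 25
print(f"contribution = ${revenue - funding_cost - ops_cost - credit_loss}")  # $45
```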
Definition
Balance written off as uncollectible after prolonged delinquency, per policy.
In Practice
Policy timing affects reported losses and recovery tracking.
How It’s Measured
Charge-Off Rate = Charge-Offs / average receivables (or average balance), per period.
Common Pitfall
Different charge-off policies make comparisons misleading.
Example
$200k charge-offs on $10m average receivables → 2%.
Definition
Losses after recoveries, expressed as a rate.
In Practice
Used to compare portfolio performance across cohorts and periods.
How It’s Measured
Net Charge-Off Rate = (Charge-Offs − Recoveries) / average receivables (or balance).
Common Pitfall
Denominator choice changes interpretation—state it.
Example
($100k − $40k) / $5m → 1.2%.
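Both rates as straightforward ratios, reproducing the examples above; the denominator (average receivables versus average balance) is a convention you should state, as the pitfall notes.

```python
def charge_off_rate(charge_offs: float, avg_receivables: float) -> float:
    """Gross charge-offs over average receivables for the period."""
    return charge_offs / avg_receivables

def net_charge_off_rate(charge_offs: float, recoveries: float,
                        avg_receivables: float) -> float:
    """Charge-offs net of recoveries over the same stated denominator."""
    return (charge_offs - recoveries) / avg_receivables

print(charge_off_rate(200_000, 10_000_000))             # 0.02  -> 2%
print(net_charge_off_rate(100_000, 40_000, 5_000_000))  # 0.012 -> 1.2%
```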
Definition
Cohort view showing delinquency or loss progression over time.
In Practice
Highlights seasoning effects and macro impacts on new cohorts.
How It’s Measured
Metric plotted by months-on-book (MOB) per cohort.
Common Pitfall
Policy changes distort curves if not segmented.
Example
2025 Q1 cohort peaks at 3% net charge-off by MOB 9.
Definition
Underwriting using transaction-level income, spend, and balance patterns.
In Practice
Focuses on stability, liquidity, and capacity across rolling windows.
How It’s Measured
Signals calculated over multiple windows (e.g., 30/60/90 days) with trend comparisons.
Common Pitfall
Short windows can misread seasonality and pay-cycle shifts; long windows can miss recent shocks.
Example
Stable pay cadence and low liquidity stress support higher confidence in capacity.
Definition
Variation in income amount and timing across a time window.
In Practice
Derived from qualifying deposits to infer pay regularity and shocks.
How It’s Measured (Illustrative)
CV = σ / μ of income deposits.
Common Pitfall
Misclassifying transfers as income inflates stability.
Example
Mean $1,000, σ $200 → CV = 0.20.
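A minimal sketch of the CV calculation; it assumes transfers have already been filtered out of the deposit list (per the pitfall above), and it uses the population standard deviation, which is itself a convention.

```python
import statistics

def income_cv(qualifying_deposits: list[float]) -> float:
    """Coefficient of variation (sigma / mu) of qualifying income deposits.
    Assumes transfers were already excluded upstream."""
    mu = statistics.mean(qualifying_deposits)
    sigma = statistics.pstdev(qualifying_deposits)  # population sigma; convention varies
    return sigma / mu

print(income_cv([800, 1_200, 800, 1_200]))  # mean 1,000, sigma 200 -> 0.20
```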
Definition
Degree of day-to-day balance fluctuation within a time window.
In Practice
Signals resilience: buffers versus frequent drawdowns to low balances.
How It’s Measured (Illustrative)
σ(balance) or (p95 − p5) of daily balances.
Common Pitfall
Single-account balance can hide liquidity held elsewhere.
Example
p95 $900, p5 $50 → spread = $850.
Definition
Number of days current cash can cover typical outflows.
In Practice
Used as a liquidity cushion signal in underwriting and monitoring.
How It’s Measured (Illustrative)
Buffer Days = available cash / average daily outflows.
Common Pitfall
Outflow averages distort when unusual bills are included.
Example
$600 cash, $120/day outflows → 5 buffer days.
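Two small helpers matching the illustrative formulas for balance spread and buffer days; the percentile choice and the rule for excluding unusual bills from the outflow average are assumptions handled upstream.

```python
import statistics

def balance_spread(daily_balances: list[float]) -> float:
    """(p95 - p5) spread of daily balances as a simple volatility measure."""
    cuts = statistics.quantiles(daily_balances, n=100)  # cut points p1..p99
    return cuts[94] - cuts[4]

def buffer_days(available_cash: float, avg_daily_outflows: float) -> float:
    """Days current cash covers typical outflows; exclude one-off bills upstream."""
    return available_cash / avg_daily_outflows

print(buffer_days(600, 120))  # 5.0 buffer days
```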
Definition
Signals of shortfall such as non-sufficient funds (NSF) events, overdrafts, or repeated near-zero balance days.
In Practice
Monitored for emerging risk; spikes often precede delinquency.
How It’s Measured
Counts and persistence of stress events per window.
Common Pitfall
Fee timing can create false “negative day” counts.
Example
3 overdrafts in 30 days → elevated liquidity stress.
Definition
Computing the same signal across timeframes to capture level and trend.
In Practice
Compare recent (e.g., 30 days) against baseline (e.g., 90 days) for shifts.
How It’s Measured
Repeat metrics over multiple windows and compare deltas.
Common Pitfall
Changing window definitions breaks comparability across periods.
Example
Income CV rises from 0.15 (90d) to 0.35 (30d).
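A sketch of the level-versus-trend comparison: the same CV metric computed over a recent 30-day window and a 90-day baseline, with the delta as the signal. Dates and amounts are invented for illustration.

```python
import statistics
from datetime import date, timedelta

def cv(amounts: list[float]) -> float:
    return statistics.pstdev(amounts) / statistics.mean(amounts)

def trailing(deposits: list[tuple[date, float]], days: int,
             as_of: date) -> list[float]:
    """Deposit amounts inside the trailing window ending at `as_of`."""
    start = as_of - timedelta(days=days)
    return [amt for d, amt in deposits if start < d <= as_of]

as_of = date(2025, 3, 31)
deposits = [(date(2025, 1, 15), 1_000), (date(2025, 1, 31), 1_000),
            (date(2025, 2, 14), 1_000), (date(2025, 2, 28), 1_000),
            (date(2025, 3, 14), 700),   (date(2025, 3, 28), 1_300)]

baseline = cv(trailing(deposits, 90, as_of))  # ~0.17 over 90 days
recent   = cv(trailing(deposits, 30, as_of))  # ~0.30 over the last 30 days
print(f"CV 90d={baseline:.2f} 30d={recent:.2f} delta={recent - baseline:+.2f}")
```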
Definition
Estimated repayment headroom after core expenses and obligations are covered.
In Practice
Used to size offers and monitor stress over time.
How It’s Measured (Conceptual)
Capacity = Income − Core Outflows − Debt Payments (definitions vary).
Common Pitfall
Misclassification can materially overstate capacity.
Example
$4,000 − $2,600 − $900 → $500 monthly capacity.
Definition
Expenses divided by income over a defined period window.
In Practice
Higher ratios signal tighter budgets and lower shock absorption.
How It’s Measured
ETI = Expenses / Income.
Common Pitfall
One-off bills spike ratios; smoothing and minimum coverage rules help.
Example
$3,200 / $4,000 → ETI = 0.80.
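The two conceptual formulas as code, reproducing the examples above; classifying core outflows versus discretionary spend is the hard part in practice and is assumed to happen upstream.

```python
def monthly_capacity(income: float, core_outflows: float,
                     debt_payments: float) -> float:
    """Conceptual repayment headroom; classification rules vary by lender."""
    return income - core_outflows - debt_payments

def expense_to_income(expenses: float, income: float) -> float:
    """ETI over a defined period window; smooth one-off bills upstream."""
    return expenses / income

print(monthly_capacity(4_000, 2_600, 900))  # 500 monthly capacity
print(expense_to_income(3_200, 4_000))      # 0.80
```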
Definition
The lender-owned system that turns inputs (scores, attributes, policy rules) into actions (approve/decline/refer, price, limit, tenor) and produces required communications.
In Practice
A risk analytics layer feeds decision-ready signals into the decision engine; policy ownership stays with the lender.
How It’s Measured
Decision latency, decision consistency, reason code coverage, and traceability (inputs → rules → action).
Common Pitfall
Treating analytics outputs as “the decision” rather than governed inputs.
Example
A score enters a ruleset that sets verification, limit, and pricing based on lender thresholds and constraints.
Definition
The documented mapping from model outputs to lending strategy: what actions are taken at each band and under what constraints.
In Practice
Connects risk and capacity insights to approvals, exposure, and unit economics.
How It’s Measured
A governed decision table/policy matrix with version control and performance monitoring by band.
Common Pitfall
Frequent tuning without governance creates drift that can’t be explained or defended.
Example
Score bands define: approve with $X; approve with verification; refer; decline.
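A minimal sketch of a decision table as versioned data; the bands, actions, and version label are hypothetical, and a real policy matrix carries approvals and constraints alongside it.

```python
# Hypothetical policy matrix: score band -> action (version-controlled in practice).
POLICY_VERSION = "2025.03"
DECISION_TABLE = [
    (720, "approve"),                    # score >= 720
    (660, "approve_with_verification"),  # 660-719
    (600, "refer"),                      # 600-659
    (0,   "decline"),                    # below 600
]

def decide(score: int) -> str:
    for floor, action in DECISION_TABLE:
        if score >= floor:
            return action
    return "decline"

print(POLICY_VERSION, decide(705))  # approve_with_verification
```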
Definition
Human-readable factors that summarize what is contributing most to a score or risk/capacity assessment.
In Practice
Used for governance review, operational interpretation, and (where applicable) communications.
How It’s Measured
Ranked driver categories with directionality (e.g., “income instability increased risk”).
Common Pitfall
Drivers that don’t map to policy become hard to act on.
Example
Top drivers include income regularity, buffer behavior, and frequency of liquidity stress events.
Definition
Codified explanations that map signals to standardized rationale categories for transparency and governance.
In Practice
Helps align analytics outputs to lender policy and decision documentation.
How It’s Measured
Coverage rate, stability over time, and consistency with decision outcomes.
Common Pitfall
Using generic labels that aren’t governed or testable.
Example
A decline includes reason codes indicating insufficient buffer and elevated recent liquidity stress.
Definition
Customer-facing notification requirements when credit is denied or offered on materially less favorable terms, depending on jurisdiction and product.
In Practice
Signals used in decisions should be explainable and align to the lender’s reason framework.
How It’s Measured
Timeliness, completeness, consistency, and auditability from decision inputs to notice content.
Common Pitfall
Using signals that improve performance but cannot be consistently explained or governed downstream.
Example
Driver categories map to a lender’s reason taxonomy used in customer notices and internal QA.
Definition
Days a scheduled payment is past due.
In Practice
Used to bucket delinquency and trigger servicing workflows.
How It’s Measured
DPD = max(0, Today − Due Date) (account for grace periods per policy).
Common Pitfall
Different payment schedules and grace policies make DPD comparisons across products misleading.
Example
Due Jan 1, today Jan 10 → DPD = 9.
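A minimal DPD helper; treating grace days as a simple subtraction before bucketing is one policy choice among several and is an assumption here.

```python
from datetime import date

def days_past_due(due: date, today: date, grace_days: int = 0) -> int:
    """DPD = max(0, today - due); grace handling varies by policy
    (assumed here as a subtraction before bucketing)."""
    return max(0, (today - due).days - grace_days)

print(days_past_due(date(2025, 1, 1), date(2025, 1, 10)))  # 9
```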
Definition
Monitoring repayment capacity using post-origination transaction behavior.
In Practice
Tracks capacity, liquidity stress, and stability shifts between payments.
How It’s Measured
Rolling-window changes versus baseline, with persistence rules and escalation tiers.
Common Pitfall
Alert overload without tuning reduces operational usefulness.
Example
Income down 20% and buffer days halve → flag for review.
Definition
Leading metric that signals rising delinquency or default risk.
In Practice
Built from persistent shifts in key cash flow signals, not single-point anomalies.
How It’s Measured
Thresholded deltas versus baseline plus persistence (e.g., N consecutive windows).
Common Pitfall
Single-signal alerts create noise without persistence checks.
Example
Two consecutive weeks of negative-balance days triggers an alert.
Definition
Adverse change in cash flow behavior relative to prior baseline.
In Practice
Captures trend breaks: income down, stress events up, buffers erode.
How It’s Measured
Δmetric = recent window − baseline window (or ratio change), evaluated for persistence.
Common Pitfall
Overreacting to one-off shocks instead of smoothing them.
Example
NSFs move from 0 to 2 month-on-month → deterioration.
Definition
Defined thresholds that prompt review, monitoring, or watchlist inclusion.
In Practice
Rules combine severity, persistence, and trend direction by segment.
How It’s Measured
Trigger when metric crosses X for N days/weeks.
Common Pitfall
Static thresholds drift as macro conditions change.
Example
Buffer days < 2 for 14 days triggers review.
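A sketch of the severity-plus-persistence pattern the last three entries describe: flag only when the metric breaches a threshold for N consecutive observations. The threshold and window values are the illustrative ones above.

```python
def persistent_breach(daily_values: list[float], threshold: float,
                      n_days: int) -> bool:
    """True once the metric stays below `threshold` for `n_days` consecutive days."""
    run = 0
    for value in daily_values:
        run = run + 1 if value < threshold else 0
        if run >= n_days:
            return True
    return False

# Buffer days < 2 for 14 consecutive days triggers review (illustrative).
buffer_history = [3.0, 2.5] + [1.5] * 14 + [2.2]
print(persistent_breach(buffer_history, threshold=2.0, n_days=14))  # True
```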
Definition
Share of accounts moving to worse delinquency buckets over time.
In Practice
Helps forecast near-term loss and evaluate intervention effectiveness.
How It’s Measured
Roll 30→60 = Count(30→60) / Count(30) (define observation window).
Common Pitfall
Small bucket sizes make roll rates jumpy.
Example
20 of 200 in 30 DPD roll to 60 → 10%.
Definition
Share of delinquent accounts returning to current within a window.
In Practice
Core collections KPI; higher cures typically reduce LGD.
How It’s Measured
Cure = Count(delinquent→current) / Count(delinquent).
Common Pitfall
Counting partial payments as cures overstates improvement.
Example
90 of 300 at 30 DPD cure → 30%.
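Both collections KPIs as simple ratios over a defined observation window, reproducing the examples above; what counts as "cured" (full payment versus partial) is a definition choice, per the pitfall.

```python
def roll_rate(rolled_worse: int, bucket_count: int) -> float:
    """Share of a delinquency bucket moving to a worse bucket in the window."""
    return rolled_worse / bucket_count

def cure_rate(cured: int, delinquent: int) -> float:
    """Share of delinquent accounts returning fully current (partials excluded)."""
    return cured / delinquent

print(roll_rate(20, 200))  # 0.10 -> 10% roll from 30 to 60 DPD
print(cure_rate(90, 300))  # 0.30 -> 30% cure
```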
Definition
How often you refresh cash flow signals or risk scores after origination.
In Practice
Cadence depends on product type, portfolio volatility, and operational capacity.
How It’s Measured
Refresh frequency, percent successfully rescored, and impact on early warning precision.
Common Pitfall
Inconsistent cadence across segments creates monitoring blind spots.
Example
Monthly rescoring for revolving lines, with more frequent checks for watchlist accounts.
Definition
Adjusting exposure post-origination (e.g., line/limit/terms) based on updated risk and capacity signals.
In Practice
A lever for managing loss and capital efficiency without waiting for delinquency.
How It’s Measured
Limit changes by segment, subsequent utilization, roll rates, cure rates, and net loss impact.
Common Pitfall
Late or blunt actions can harm resilient customers or miss the intervention window.
Example
Reduce exposure for accounts with persistent buffer erosion while leaving stable accounts unchanged.
Definition
Operational actions designed to prevent escalation when customers show signs of stress, consistent with lender policy.
In Practice
Triggered by payment performance plus leading indicators, with clear escalation rules.
How It’s Measured
Take-up rates, cure rates, re-default rates, and outcomes versus comparable cohorts.
Common Pitfall
Over-triggering creates noise and cost; under-triggering misses the window to help.
Example
Early warning flags prompt outreach that offers a temporary plan before a missed payment occurs.
Definition
A recommended operational step tied to a risk or capacity change, intended to support consistent servicing decisions.
In Practice
Most useful when constrained by policy (allowed actions) and tuned for workload.
How It’s Measured
Action acceptance rate, time-to-action, and downstream outcomes (cure/roll/loss).
Common Pitfall
Recommendations that ignore constraints get ignored and lose credibility.
Example
Route to outreach queue for persistent buffer erosion; keep monitoring for mild, non-persistent changes.
Definition
How well a model ranks bad outcomes above good ones.
In Practice
Tracked by cohort to confirm discrimination holds over time.
How It’s Measured
AUC = P(score_bad > score_good), counting ties as half.
Common Pitfall
High AUC can hide poor probability calibration.
Example
AUC = 0.78 indicates strong ranking separation.
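A direct pairwise estimate of AUC that makes the probability interpretation concrete; it is O(n²) and meant as a sketch with toy scores, not a production implementation.

```python
def auc(scores_bad: list[float], scores_good: list[float]) -> float:
    """Pairwise AUC: P(score_bad > score_good), ties counted as half."""
    wins = sum((b > g) + 0.5 * (b == g)
               for b in scores_bad for g in scores_good)
    return wins / (len(scores_bad) * len(scores_good))

print(auc([0.9, 0.7, 0.4], [0.5, 0.3, 0.2, 0.1]))  # ~0.92
```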
Definition
Testing predictions against realized outcomes on historical cohorts.
In Practice
Run by vintage and segment to validate stability and generalization.
How It’s Measured
Compare predicted versus observed by decile/bin and cohort.
Common Pitfall
Data leakage inflates backtest performance unrealistically.
Example
Predicted 12% vs observed 11% in top risk decile.
Definition
Checking model performance parity across groups and key segments consistent with legal/regulatory requirements.
In Practice
Compare ranking and error rates to surface disparate impact risk.
How It’s Measured
Differences in AUC and error rates (TPR/FPR) across groups.
Common Pitfall
Small samples can produce noisy conclusions.
Example
AUC 0.75 vs 0.74 across groups suggests similar ranking performance (sample size still matters).
Definition
How closely predicted probabilities match observed default rates.
In Practice
Reviewed by bin and cohort; recalibrate when drift emerges.
How It’s Measured
Observed rate ≈ mean(predicted PD) per bin.
Common Pitfall
Averaging across segments masks miscalibration within segments.
Example
Bin PD 5.0% vs observed 5.2% → close match.
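A sketch of a by-bin calibration check: sort by predicted PD, split into equal-count bins, and compare mean predicted against observed rates. The bin count and splitting rule are conventions.

```python
import statistics

def calibration_by_bin(preds: list[float], outcomes: list[int],
                       n_bins: int = 10) -> None:
    """Mean predicted PD vs observed default rate per equal-count score bin."""
    pairs = sorted(zip(preds, outcomes))
    size = max(1, len(pairs) // n_bins)
    for i in range(0, len(pairs), size):
        chunk = pairs[i:i + size]
        predicted = statistics.mean(p for p, _ in chunk)
        observed = statistics.mean(o for _, o in chunk)
        print(f"bin {i // size}: predicted {predicted:.3f} vs observed {observed:.3f}")
```

Repeating the check within key segments guards against the masking pitfall above.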
Definition
A stability metric that measures how much a distribution has shifted between two samples (often used for input/score drift).
In Practice
Used to flag meaningful distribution change that warrants investigation.
How It’s Measured
Compare bin proportions between baseline and current samples: PSI = Σ (current% − baseline%) × ln(current% / baseline%).
Common Pitfall
PSI signals shift, not necessarily performance impact—pair with outcome metrics.
Example
PSI 0.30 suggests meaningful shift (thresholds vary by lender).
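The standard PSI formula over matched bin proportions, with invented bin values; zero-count bins need smoothing in practice, which this sketch omits.

```python
import math

def psi(baseline: list[float], current: list[float]) -> float:
    """PSI = sum((current% - baseline%) * ln(current% / baseline%)) over bins.
    Each list should sum to 1; smooth zero bins before calling."""
    return sum((c - b) * math.log(c / b) for b, c in zip(baseline, current))

baseline_bins = [0.10, 0.20, 0.40, 0.20, 0.10]
current_bins  = [0.05, 0.15, 0.35, 0.25, 0.20]
print(f"PSI = {psi(baseline_bins, current_bins):.2f}")  # ~0.14 (thresholds vary)
```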
Definition
Running alternative models or strategies in parallel to select the best performer.
In Practice
Compare discrimination, calibration, stability, and business KPIs together.
How It’s Measured
Side-by-side KPI set across cohorts/segments.
Common Pitfall
Choosing solely on AUC can worsen loss or margin.
Example
Challenger reduces expected loss at similar approval rate (validate across cohorts).
Definition
Model effectiveness changing as data, behavior, or conditions shift.
In Practice
Monitored via feature stability and outcome performance by month.
How It’s Measured
PSI plus changes in AUC/calibration and observed outcomes over time.
Common Pitfall
Ignoring drift can quietly increase losses or change who gets approved.
Example
PSI of 0.30 combined with an AUC drop of 0.05 signals meaningful drift.
Definition
Scoring in production without affecting decisions to validate safely.
In Practice
Collect outcomes, compare to current approach, then decide rollout.
How It’s Measured
Run silently; evaluate stability, lift, AUC, and calibration once outcomes mature.
Common Pitfall
Short shadow periods miss longer-cycle outcomes.
Example
Shadow for a defined period, then enable for a controlled slice with monitoring.
Definition
A modeling constraint that enforces directional relationships when governance or policy requires it.
In Practice
Can improve defensibility and reduce counterintuitive outcomes.
How It’s Measured
Constraint compliance checks and stability review across cohorts/segments.
Common Pitfall
Over-constraining can reduce predictive power.
Example
Higher sustained buffer should not increase modeled risk, all else equal (subject to lender definitions).
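One way to enforce such a constraint, sketched with scikit-learn's HistGradientBoostingClassifier on synthetic data; the feature names and data-generating process are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier

rng = np.random.default_rng(0)
buffer_days = rng.uniform(0, 30, 5_000)    # liquidity cushion (synthetic)
stress_events = rng.poisson(2, 5_000)      # NSF/overdraft counts (synthetic)
X = np.column_stack([buffer_days, stress_events])
prob_bad = 1 / (1 + np.exp(0.2 * buffer_days - 0.8 * stress_events))
y = rng.binomial(1, prob_bad)              # risk falls with buffer, rises with stress

# monotonic_cst: -1 = prediction non-increasing in the feature, +1 = non-decreasing.
model = HistGradientBoostingClassifier(monotonic_cst=[-1, 1]).fit(X, y)

# Compliance check: modeled risk must not rise as buffer grows, stress held fixed.
grid = np.column_stack([np.linspace(0, 30, 50), np.full(50, 2)])
assert np.all(np.diff(model.predict_proba(grid)[:, 1]) <= 1e-9)
```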
Definition
Testing on later time periods than training to validate stability under changing conditions.
In Practice
Especially important when macro and borrower behavior shift.
How It’s Measured
Compare AUC and calibration on out-of-time holdout cohorts.
Common Pitfall
Random splits can look strong while failing to reflect real deployment risk.
Example
Train on 2022–2023 cohorts; validate on 2024 cohorts; monitor 2025 vintages.
Definition
Controlled deployment phases (shadow → limited production slice → gradual expansion) with predefined guardrails.
In Practice
Reduces downside risk while evidence accumulates.
How It’s Measured
KPIs monitored against thresholds; explicit expand/stop rules by cohort and segment.
Common Pitfall
Deploying without stop conditions turns “testing” into unmanaged exposure.
Example
Enable for a narrow segment with exposure caps and rollback triggers if calibration deteriorates.
Definition
A documented record of model and strategy changes, including rationale, approvals, and impact assessment.
In Practice
Makes change visible and auditable; reduces governance friction and operational surprises.
How It’s Measured
Completeness (what changed/why/who approved), linkage to monitoring, and retrievability for audit.
Common Pitfall
“Minor tweaks” that aren’t logged create unexplained drift.
Example
A log entry records a new feature version, validation results, expected impact, approvals, and rollout plan.
Definition
Transaction-based risk score summarizing cash flow stability and capacity signals, delivered with explainable drivers to support lender decisioning.
In Practice
Used as a governed input to underwriting and account management strategies; lenders retain policy control.
How It’s Measured
Score output plus associated drivers; monitored for coverage, stability, and performance over time.
Common Pitfall
Treating the score as a decision instead of decision support.
Example
A score plus key drivers supports consistent treatment near approval boundaries.
Definition
A probability model estimating a defined bad-outcome risk from behavioral attributes, validated by cohort and monitored for drift.
In Practice
Configured to lender outcome definitions and review cadence.
How It’s Measured
Predicted PD aligned to the defined horizon and default event, with calibration checks.
Common Pitfall
Changing outcome definitions without retraining breaks comparability.
Example
Outputs PD for a 12-month 90+ DPD outcome definition.
Definition
An analytics layer that supports offer guidance by quantifying risk and return trade-offs under constraints.
In Practice
Used to test scenarios across segments and support strategy and policy tuning.
How It’s Measured
Scenario KPIs versus baseline (approvals, expected loss, margin), subject to constraints.
Common Pitfall
Optimizing without constraints produces impractical recommendations.
Example
Recommends a limit and price combination that meets margin targets within expected loss guardrails.
Definition
Snapshot metrics of income, expenses, obligations, and stability derived from transactions, designed to support review workflows.
In Practice
Adds context for thin-file borrowers using transaction-derived indicators.
How It’s Measured
A metric set across rolling windows with trend deltas, plus coverage/freshness qualifiers.
Common Pitfall
Assuming a single-account view is complete when coverage is partial.
Example
Shows ETI 0.80 and buffer days 2, indicating constrained liquidity.
Definition
Monitoring outputs that flag emerging post-origination risk using transaction behavior.
In Practice
Flags capacity erosion, liquidity stress, and behavioral deterioration earlier, supporting prioritization and intervention.
How It’s Measured
Rolling-window deltas and thresholded alert rules with persistence.
Common Pitfall
Alert overload without tuning reduces operational usefulness.
Example
Alert: buffer days down 60% over 30 days with persistent liquidity stress.
Note: This glossary is for informational purposes only. Definitions, formulas, and thresholds vary by lender, product, and jurisdiction.