Guide · May 6, 2026 · 11 min read

OCC Bulletin 2025-26: Model Risk Management for Community Banks

The bulletin mattered because it told community banks not to treat every underwriting tool like a full-scale SR 11-7 program. It still matters because that proportionality lens survived the April 2026 rewrite.

[Image: Abstract regulatory timeline connecting OCC Bulletin 2025-26 to the April 2026 model-risk update, with teal highlights around proportionality for community banks]

Short answer: OCC Bulletin 2025-26 clarified that community banks do not need one rigid, annual, full-scope model-risk routine for every underwriting tool. They need controls commensurate with actual risk, actual business activity, and the complexity and extent of model use.

That clarification was issued on October 6, 2025. The larger rewrite arrived on April 17, 2026, when SR 26-2 and OCC Bulletin 2026-13 superseded the 2011 guidance and narrowed what counts as a model. Bulletin 2025-26 is no longer the whole framework, but it is still the clearest community-bank explainer for proportionality.

For teams evaluating AI underwriting tools, that proportionality lens helps sort the typical vendor demo into three buckets: real model-risk components that need validation, deterministic decision-support software that needs vendor oversight, and third-party workflow software that needs governance without a standalone validation program.

If you want the current rulebook first, read the revised model risk management guide. If you want the operating version of this conversation, pair this page with the examiner-readiness guide and the AI-Assisted Underwriting Playbook.

Scannable summary

What did OCC Bulletin 2025-26 really do?

  • Who was it for? Community banks, which the bulletin footnote ties to institutions up to $30 billion in assets.
  • What was the main clarification? Model-risk practices should be tailored to the bank’s risk exposures, business activities, and the complexity and extent of model use.
  • Did it require annual validation? No. It says OCC guidance should not be interpreted to require annual model validation for community banks.
  • Why does it still matter now? Because the April 2026 framework kept the risk-based posture and gave banks a cleaner model-versus-software distinction.

What did OCC Bulletin 2025-26 actually clarify for community banks?

The bulletin was not an AI-specific underwriting memo. It was a clarification about how community banks should apply model-risk guidance in the real world. The OCC said smaller institutions have flexibility to tailor validation frequency and scope, and that exam teams should not criticize a bank solely because it reasonably chose a lighter validation cadence based on risk.

That matters because plenty of community banks had drifted into a heavyweight version of model risk management, treating a modest vendor tool in a small commercial shop as if it needed the same machinery as a large-bank quantitative model stack. Bulletin 2025-26 pushed back on that and reaffirmed that proportionality was the point all along.

Plain-English read: a community bank using a narrow underwriting tool was never supposed to build a giant annual validation ritual just because the word model appeared somewhere in the vendor packet.

Why does the bulletin still matter after the April 2026 rewrite?

Because the April 2026 rewrite both replaced the 2011 framework and narrowed the model definition. Under the new guidance, “model” tracks complex quantitative methods, with simple arithmetic and deterministic rule-based software treated as out of scope. Community banks gained a better classification tool, but still need the proportionality logic from Bulletin 2025-26 to decide how heavy their controls should be once a workflow is classified.

  • Deterministic decision-support software. In underwriting, this usually means checklist logic, spreadsheet-style ratios, fixed routing, and fixed policy thresholds. Primary control question: does the software behave consistently, and can the bank govern the workflow and vendor risk?
  • Model-driven analytics. In underwriting, this usually means quantitative estimates, probabilistic scoring, and pattern-driven risk outputs. Primary control question: what validation, monitoring, and change control are proportionate to the actual model risk?
  • Generative assistance. In underwriting, this usually means memo drafting, analyst chat, and document Q&A. Primary control question: what separate usage, data, and human-review limits does the bank need, given that generative AI is outside the 2026 model-risk guidance scope?
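The three-bucket split above can be sketched as a simple classification routine. This is a minimal, illustrative sketch only: the `Component` fields, `classify` function, and bucket names are hypothetical labels for this article's framing, not terms from the OCC guidance or any vendor product.

```python
from dataclasses import dataclass
from enum import Enum

class Bucket(Enum):
    DETERMINISTIC_SOFTWARE = "deterministic decision-support software"
    MODEL_DRIVEN = "model-driven analytics"
    GENERATIVE = "generative assistance"

@dataclass
class Component:
    name: str
    produces_quantitative_estimate: bool  # probabilistic scoring, pattern-driven outputs
    uses_generative_ai: bool              # memo drafting, analyst chat, document Q&A

def classify(component: Component) -> Bucket:
    # Generative features sit in a separate governance track,
    # outside the April 2026 model-risk guidance scope.
    if component.uses_generative_ai:
        return Bucket.GENERATIVE
    # Quantitative estimation is what pulls a component into model territory.
    if component.produces_quantitative_estimate:
        return Bucket.MODEL_DRIVEN
    # Fixed logic, ratios, routing, and thresholds: software, not a model.
    return Bucket.DETERMINISTIC_SOFTWARE

# Example: a vendor "AI spreading" feature that only applies fixed ratio rules
spreader = Component("ratio rollup", produces_quantitative_estimate=False,
                     uses_generative_ai=False)
print(classify(spreader).value)  # deterministic decision-support software
```

The point of the sketch is that the deciding questions are behavioral (does it estimate? does it generate?), not whether the vendor brochure says "AI."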

That is why Bulletin 2025-26 still earns a place in the conversation. It no longer sets scope on its own, but it remains the cleanest explanation for why a community bank can classify honestly and stay proportionate instead of overbuilding controls by default.

What are the five takeaways for community banks using AI underwriting tools?

1. Not all AI underwriting tools are models under the guidance

This is the big one. A vendor saying “AI” does not settle the classification question. After April 2026, many workflow components used in spreading, document collection, and ratio rollups may sit outside model guidance if they are deterministic or simple arithmetic. Banks still need to understand them. They just should not call every feature a model by reflex.

2. Vendor products with embedded analytics usually trigger third-party diligence first

Community banks buying underwriting software still owe vendor oversight, data controls, and workflow review. Model-risk validation layers on top where the product contains model-driven quantitative estimates. Generative features sit in a separate governance track because the April 2026 model-risk guidance treats generative AI as outside its scope. That layered framing avoids the two common mistakes: building a full standalone validation package around every embedded analytic, and treating vendor software as if it shifts accountability off the bank.

3. Documentation proportionality still matters

Bulletin 2025-26 is blunt on this point. Validation frequency and scope should be commensurate with risk. A community bank should have an evidence pack that fits the workflow: classification, testing on real files, override logs, version history, and named ownership. It does not need documentation designed for a bank ten times its size.

4. Examiners still expect the bank to understand what the tool does

“The vendor handles that” is still a bad answer. The bank should be able to explain what the tool does, what data it uses, when humans override it, how changes are introduced, and whether the output is deterministic support work or a genuine quantitative estimate. That is the heart of the examiner-readiness workflow.

5. The April 2026 framework reinforces the same risk-based logic

The 2026 update did not kill the proportionality idea. It reinforced it. What changed was the scope language. Community banks now have a better way to separate model-driven features from decision-support software, while keeping the same common-sense principle from 2025-26: the heavier the risk, the heavier the control. If you need the full current translation, go to the revised model risk management guide.

What should a credit team do with Bulletin 2025-26 now?

  1. Classify the stack first. Separate deterministic workflow, model-driven estimates, and generative assistance.
  2. Keep the evidence proportionate. Build enough proof to defend the use case, sized to the actual risk rather than the binder a large-bank governance team would assemble.
  3. Validate on your own real files. Multi-entity 1065s and exception-heavy commercial deals are where workflow weaknesses show up fastest.
  4. Preserve human overrides. Original output, corrected output, reason, user, timestamp.
  5. Use 2025-26 as the proportionality lens, not the whole map. The current framework is the April 2026 guidance plus your bank’s own controls.

That is basically the Aloan position too. The useful underwriting systems are the ones that let a lender show its work. If a bank cannot explain what the software did, what the underwriter changed, and why the file moved forward, the workflow is not ready no matter how slick the demo looked.

Where do community banks still misread the bulletin?

The misreads cluster in three patterns: overreading proportionality as license for light documentation regardless of what the tool does; underreading it and building a validation routine far heavier than the workflow deserves; or letting the vendor collapse software, models, and generative features into a single product story so that every downstream control conversation gets sloppy.

Overreaction

Treating every underwriting workflow as if it needs a full large-bank model governance program.

Underreaction

Using proportionality as an excuse not to classify the workflow, test the output, or preserve overrides.

Vendor fog

Accepting one black-box answer instead of separating fixed software logic from actual quantitative estimation.

The better posture is to use Bulletin 2025-26 for proportionality and the April 2026 framework for scope and definitions, then make the lender-facing workflow legible enough that a chief credit officer and an examiner would describe it the same way after five minutes in the file.

What should an examiner-ready evidence pack include?

Not much theater. Just the pieces that prove the bank understands the workflow and can govern it. For most community-bank AI underwriting use cases, that means a compact set of artifacts rather than a giant policy binder.

  • A classified workflow map. What is deterministic software, what is model-driven, what is generative, and where each part sits in the credit process.
  • Parallel-run results on real files. Not only demo packs, and definitely not only clean 1040s.
  • Override evidence. Original output, corrected output, user, reason, timestamp, and enough context to tell whether the system is improving or drifting.
  • Change-control notes. What changed in the vendor logic, what the bank re-tested, and who approved the update.
  • Named ownership. Someone at the bank who can explain the use case without hiding behind the vendor.
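The override-evidence artifact above is concrete enough to sketch. This is an illustrative data-shape only, assuming a bank captures overrides as append-only records; the `OverrideRecord` fields and `record_override` helper are hypothetical names, not part of any examiner requirement or product API.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class OverrideRecord:
    # The fields the evidence pack calls for, plus context for drift review.
    original_output: str
    corrected_output: str
    user: str
    reason: str
    timestamp: str  # ISO-8601, UTC
    context: str = ""

def record_override(original: str, corrected: str, user: str,
                    reason: str, context: str = "") -> OverrideRecord:
    # Timestamp at capture time so the log orders itself chronologically.
    return OverrideRecord(
        original_output=original,
        corrected_output=corrected,
        user=user,
        reason=reason,
        timestamp=datetime.now(timezone.utc).isoformat(),
        context=context,
    )

log: list[OverrideRecord] = []
log.append(record_override(
    original="DSCR 1.18 (vendor rollup)",
    corrected="DSCR 1.05 (excluded one-time gain)",
    user="jdoe",
    reason="Non-recurring income included in vendor spread",
    context="Multi-entity 1065, FY2025",
))
```

Frozen records are a deliberate choice: an override log only proves anything if entries cannot be quietly edited after the fact.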

That is the practical bridge from Bulletin 2025-26 to the live 2026 framework: a proportionate evidence pack is sized to the real risk while staying strong enough for a reviewer to reconstruct the logic without guesswork. The playbook and the examiner-readiness guide are the operational companions to that posture.

Frequently asked questions

Did OCC Bulletin 2025-26 require annual model validation for community banks?

No. The bulletin says OCC guidance does not, and should not be interpreted to, require community banks to perform annual model validation. Validation frequency and scope should be tailored to the bank’s risk exposure, business activity, and the complexity and extent of model use.

Is OCC Bulletin 2025-26 still current after April 2026?

It is no longer the main current framework by itself. The April 17, 2026 interagency update issued through SR 26-2 and OCC Bulletin 2026-13 superseded the older 2011 guidance and now sets the live model-risk frame. Bulletin 2025-26 still matters because it remains the clearest community-bank explanation of proportionality.

Does every AI underwriting feature count as a model?

No. The April 2026 guidance narrows the definition to complex quantitative methods and excludes simple arithmetic and deterministic rule-based software. That makes classification the first real governance task.

What should a community bank ask an AI underwriting vendor first?

Ask the vendor to break the product into components: which parts are deterministic workflow software, which parts produce quantitative estimates, and which parts use generative features. Then ask what evidence supports each component and how human overrides are preserved.

How this works in practice: Aloan is built for the parts of underwriting that have to stand up to a reviewer: source-traceable spreading, visible overrides, and a clean line between workflow support and credit authority. To pressure-test that against current guidance, request a demo or read the companion explainer on OCC Bulletin 2026-13 and the revised model risk management guide.


See what proportional AI underwriting governance actually looks like

Walk through source traceability, vendor boundaries, override controls, and human decision authority using your actual commercial lending workflow.