Aloan
Industry Insights April 20, 2026 · 12 min read · By Gerrit Yntema

What OCC Bulletin 2026-13 Means for Community Banks Using AI in Underwriting

The bulletin is not a new AI rulebook. It is a reset on what counts as a model, what does not, and how much discipline community banks still need when vendors say "AI underwriting."

[Image: Annotated regulatory bulletin with highlighted scope, validation, and vendor oversight callouts]

Short answer: OCC Bulletin 2026-13 does not create AI-specific underwriting rules for community banks. It updates interagency model risk management guidance, narrows what counts as a model, says generative AI and agentic AI are out of scope, and repeats that the framework is risk-based rather than prescriptive.

Community banks should still care because the narrower definition gives credit teams a cleaner way to evaluate vendors. If a product touches spreading, ratio calculation, risk flagging, or memo support, your team now needs to separate deterministic workflow software from statistical or quantitative models, then govern each one honestly. That is the immediate practical issue for any institution using or buying AI in underwriting this quarter.

The deeper regulatory backdrop still sits in our guide on examiner readiness for AI lending and the broader AI-Assisted Underwriting Playbook. This post focuses on the immediate practical question: what changed, what did not, and what a credit team should do next.

Scannable summary

What does Bulletin 2026-13 really change?

Model definition: The 2026 guidance narrows scope to complex quantitative methods and excludes simple arithmetic and deterministic rule-based software.

Generative and agentic AI: Explicitly out of scope here. That is not a free pass. It means do not pretend this bulletin resolved those governance questions.

Community bank posture: The guidance is expected to matter most above $30 billion, but smaller banks still need proportional controls when model risk is real.

Vendor diligence: Banks still own validation, monitoring, and use limits for vendor products that qualify as models.

What changed from the older guidance?

The headline change is not "the OCC issued AI rules." The headline change is that the agencies replaced the old model risk package. SR 26-2 says the revised guidance supersedes SR 11-7 and SR 21-8. The OCC bulletin also rescinds the old Comptroller's Handbook booklet and prior model issuances tied to credit scoring and BSA/AML.

More important for underwriting teams, the definition of a model got tighter. The 2011 guidance defined a model as a quantitative method, system, or approach applying statistical, economic, financial, or mathematical theories, techniques, and assumptions. The revised 2026 guidance defines a model as a complex quantitative method, system, or approach applying statistical, economic, or financial theories to produce quantitative estimates. It also explicitly excludes simple arithmetic calculations and deterministic rule-based processes or software.

Model scope: The broader 2011 definition gives way to a narrower 2026 definition centered on complex quantitative methods.

Validation cadence: Banks often treated it as a fixed annual ritual; the risk-based, tailored approach is reinforced, especially when read with OCC Bulletin 2025-26.

Vendor pitches: Workflow automation and model logic were easier to blur together; banks now have stronger footing to ask which parts are deterministic software and which parts are actual models.

That matters because a lot of underwriting vendors sell one bundle with several different control problems inside it. Document routing may be deterministic. Spreading and risk scoring may be model-driven. Memo drafting may use generative features. The revised guidance gives banks less room to blur those categories together.

Models versus deterministic software is now a credit issue, not a semantic one

If a vendor has hard-coded policy thresholds, spreadsheet-style calculations, and deterministic routing logic, the 2026 guidance gives you room to call that what it is. It may be important software. It may create operational risk. But if there are no statistical, economic, or financial theories underpinning it, the revised guidance says it is not a model for this purpose.

On the other hand, if the product estimates borrower performance, predicts risk, scores document quality probabilistically, or generates quantitative outputs from trained statistical methods, you are back in model risk territory. That is where validation, monitoring, use limits, and outcome analysis belong.

Probably not a model under the revised guidance

Deterministic document checklists, arithmetic ratio rollups, fixed policy thresholds, and software rules that always do the same thing with the same inputs.

Likely still a model

Probabilistic scoring, quantitative estimation, pattern-based exception detection, and any underwriting component driven by statistical or financial modeling rather than fixed rules.
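The two categories above can be expressed as a rough triage helper. This is a minimal sketch, not anything from the bulletin itself: the `Component` attributes and the classification labels are hypothetical names chosen to mirror the revised definition (statistical, economic, or financial theories producing quantitative estimates versus deterministic rule-based software).

```python
from dataclasses import dataclass

@dataclass
class Component:
    """One vendor feature, described the way a diligence team might catalog it."""
    name: str
    deterministic: bool             # same inputs always produce the same outputs
    uses_statistical_theory: bool   # statistical, economic, or financial methods
    produces_estimates: bool        # quantitative estimates or probabilistic scores

def governance_track(c: Component) -> str:
    """Rough triage echoing the revised definition: complex quantitative methods
    applying statistical/economic/financial theories to produce estimates land in
    model risk management; deterministic rule-based software does not."""
    if c.uses_statistical_theory and c.produces_estimates:
        return "model risk management"
    if c.deterministic:
        return "software / operational risk controls"
    return "escalate for classification review"

components = [
    Component("document checklist", deterministic=True,
              uses_statistical_theory=False, produces_estimates=False),
    Component("probabilistic risk score", deterministic=False,
              uses_statistical_theory=True, produces_estimates=True),
]
for c in components:
    print(c.name, "->", governance_track(c))
```

The point of the sketch is the third branch: anything that is neither clearly deterministic nor clearly model-driven gets escalated rather than waved through.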

Just as important is the bulletin's explicit statement that generative AI and agentic AI are out of scope. Do not misread that as safe harbor. It just means this guidance is not the finished framework for those tools. If a vendor is using generative features for memo drafting or analyst assistance, your bank still needs usage rules, data controls, and human review. We covered that operational side in what examiners actually ask about AI in lending and in our post on AI data security in commercial lending.

Do community banks under $30 billion still need to care?

The revised guidance says it is expected to be most relevant to banking organizations with over $30 billion in assets. That line will tempt some smaller institutions to shrug and move on. Bad idea.

The same guidance says it may still be relevant to banks at $30 billion or less when model risk exposure is significant because of the prevalence or complexity of models, or because the bank is doing things outside traditional community banking. That means some AI-assisted underwriting workflows can still fall back into the conversation, especially if the bank is using multiple vendor components across spreading, exception flagging, and memo support.

Read together with OCC Bulletin 2025-26, the message is straightforward. Community banks do not need a large-bank validation program for every workflow tool. They also do not get to outsource judgment to vendor packaging. You still need a proportional control stack: internal ownership, clear classification of the tool, evidence that vendor outputs work on your files, and a record of what humans changed and why.

Practical read: Bulletin 2026-13 gives community banks a cleaner way to separate automation from models. It also raises the bar on asking what is actually inside an "AI underwriting" product.

What does this mean for vendor evaluation right now?

The revised guidance keeps one point intact: vendor products do not move accountability off the bank. The 2011 guidance already said validation applies equally to vendor-developed models. The 2026 guidance keeps that principle and says sound practice includes understanding conceptual soundness, design, development data, performance, ongoing monitoring, and outcome analysis for vendor models.

So the vendor diligence script should get tighter, not looser:

  1. Make the vendor map the product. Which features are deterministic software, which are quantitative models, and which use generative features?
  2. Ask for model-specific evidence only where model logic actually exists. Do not waste time demanding model validation decks for fixed arithmetic workflows. Do demand them for scoring, estimation, and statistical outputs.
  3. Validate on your own document mix. A vendor packet is input, not closure. Your 1065-heavy, multi-entity commercial files are the real test.
  4. Force change-control clarity. If a model version changes, your team should know what changed, what was re-tested, and what monitoring follows.
  5. Keep human decision authority visible. Underwriters still own corrections, interpretations, and recommendations. The record should preserve the original machine output and the human override.

This is exactly why banks evaluating financial spreading software, tax return analysis workflows, or a full commercial loan underwriting platform should insist on a feature-by-feature governance conversation instead of one demo-level answer.
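One way to make the diligence script concrete is to map each classification to the evidence the bank requests. This is a hypothetical sketch, not a regulatory checklist: the classification labels and evidence items are illustrative, drawn from the five steps above.

```python
# Hypothetical mapping from feature classification to the evidence to request,
# following the five-step diligence script above.
EVIDENCE_BY_CLASSIFICATION = {
    "deterministic software": [
        "functional test results",
        "change log and release notes",
    ],
    "quantitative model": [
        "conceptual soundness documentation",
        "development data description",
        "performance and outcome analysis",
        "ongoing monitoring plan",
        "change-control and re-test records",
    ],
    "generative feature": [
        "usage restrictions",
        "data handling controls",
        "human review standard",
    ],
}

def evidence_request(vendor_feature_map: dict) -> dict:
    """Given the vendor's own map of feature -> classification (step 1),
    return the evidence list to demand for each feature (steps 2-4)."""
    return {feature: EVIDENCE_BY_CLASSIFICATION[cls]
            for feature, cls in vendor_feature_map.items()}

asks = evidence_request({
    "document routing": "deterministic software",
    "risk scoring": "quantitative model",
    "memo drafting": "generative feature",
})
```

The design choice worth noting: the vendor supplies the feature map, but the bank owns the evidence standard, which keeps accountability where the guidance puts it.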

Quarterly action list

A practical 5-point checklist for credit teams reviewing AI vendors this quarter

1. Classify the workflow before you bless it

Break the product into document intake, extraction, quantitative analysis, risk flags, memo support, and any generative features. Do not govern the whole thing as one black box.

2. Tie each model component to a named internal owner

If a function qualifies as a model, someone at the bank owns the use case, the limits, and the monitoring. Not the vendor. Not procurement. Someone real.

3. Run parallel validation on real files

Use your own tax returns, entity structures, and analyst edge cases. Track override frequency and where the tool breaks. That is the validation record you will actually trust later.
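Tracking override frequency during a parallel run can be as simple as comparing machine and human values per field. This is a minimal sketch under assumed data: the field names and values are invented, and a real parallel-run log would live in your workflow system, not a Python list.

```python
from collections import Counter

# Hypothetical parallel-run log: (field, machine_value, human_value)
parallel_run = [
    ("debt_service_coverage", 1.18, 1.18),
    ("debt_service_coverage", 1.42, 1.31),   # underwriter corrected K-1 income
    ("global_cash_flow", 412000, 398500),    # multi-entity rollup adjusted
    ("global_cash_flow", 250000, 250000),
]

# Count overrides (machine != human) and totals per field
overrides = Counter(f for f, machine, human in parallel_run if machine != human)
totals = Counter(f for f, _, _ in parallel_run)

for f in totals:
    rate = overrides[f] / totals[f]
    print(f"{f}: {overrides[f]}/{totals[f]} overridden ({rate:.0%})")
```

A rising override rate on one field is exactly the kind of signal that tells you where the tool breaks on your document mix.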

4. Separate generative features from model-risk claims

If memo drafting or analyst chat is generative, keep separate usage rules, data restrictions, and review standards around it. Bulletin 2026-13 did not solve that control problem for you.

5. Make sure the file shows its work

Source citations, override history, policy thresholds, approval steps, and version history should all survive inside the workflow. If you cannot reconstruct the file live, the controls are weaker than the demo suggested.
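A file that "shows its work" amounts to a record structure that preserves the machine output alongside the human override. The sketch below is one possible shape, with invented field names and values; it is not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass(frozen=True)
class AuditEntry:
    """One field in the credit file: the original machine output survives
    next to any human override, so the file can be reconstructed later."""
    field_name: str
    machine_value: str
    source_citation: str            # e.g. "Form 1065, p.1, line 22"
    model_version: str              # ties the output to a specific model release
    human_value: Optional[str] = None
    override_reason: Optional[str] = None
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def final_value(self) -> str:
        """The override wins when present, but the machine value is never lost."""
        return self.human_value if self.human_value is not None else self.machine_value

entry = AuditEntry(
    field_name="net_operating_income",
    machine_value="412000",
    source_citation="Form 1065, p.1, line 22",
    model_version="spread-model 3.2.1",
    human_value="398500",
    override_reason="Excluded non-recurring gain per policy 4.3",
)
print(entry.final_value())
```

Making the record immutable (`frozen=True`) is deliberate: corrections create new entries rather than rewriting history, which is what lets you reconstruct the file live.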

The takeaway

OCC Bulletin 2026-13 is useful precisely because it is narrower than the market noise around it. It does not tell community banks to panic about AI. It tells them to stop being sloppy about classification, validation, and vendor oversight.

If your underwriting workflow is truly AI-assisted, the right posture is still the boring one. Humans own the credit decision. Models get validated and monitored where model logic exists. Deterministic software gets governed as software. Generative features get separate constraints because this bulletin leaves them out of scope.

That is also the frame we use at Aloan. If a bank cannot explain what the machine did, what the underwriter changed, and why the file moved forward, it is not ready. If you want to pressure-test that standard against your own workflow, start with the demo or the playbook.

FAQ: OCC Bulletin 2026-13 and AI underwriting

Does OCC Bulletin 2026-13 create new AI underwriting rules for community banks?

No. The bulletin updates model risk management guidance and makes clear the framework is risk-based, not prescriptive. It does not create AI-specific underwriting rules, and it explicitly says generative AI and agentic AI are out of scope.

Do community banks under $30 billion need to care about OCC Bulletin 2026-13?

Yes. The guidance says it is expected to be most relevant to banking organizations over $30 billion, but it can still matter for smaller institutions with significant model risk exposure because of model prevalence, complexity, or nontraditional activity. Community banks also still need the proportionality principles from OCC Bulletin 2025-26.

What counts as a model under the revised 2026 guidance?

The guidance defines a model as a complex quantitative method, system, or approach that applies statistical, economic, or financial theories to produce quantitative estimates. It excludes simple arithmetic calculations and deterministic rule-based processes or software that do not rely on those theories.

What should banks ask AI underwriting vendors after Bulletin 2026-13?

Ask which parts of the product are deterministic workflows, which parts are statistical or quantitative models, and which parts use generative features. Then ask for validation evidence, model change controls, outcome monitoring, and a clear record of human review and override history.

Going deeper? Read the full examiner readiness guide, the broader AI-Assisted Underwriting Playbook, the companion guide on automating commercial loan policy compliance, or the future of commercial underwriting technology for where examiner expectations are heading next.


See how governed AI underwriting should actually work

Walk through source traceability, override controls, vendor boundaries, and human decision authority using your actual commercial lending workflow.