U.S. bank model risk guidance changed on April 17, 2026. SR 26-2 superseded SR 11-7, and OCC Bulletin 2026-13 rescinded OCC Bulletin 2011-12. For community-bank credit teams using AI underwriting tools, the immediate questions are practical: what now counts as a model, what does not, and what controls still apply.
The substantive change is the model definition itself, which is narrower and more explicit. The revised guidance describes a model as a complex quantitative method that applies statistical, economic, or financial theories to produce quantitative estimates, and treats simple arithmetic and deterministic rule-based software as outside the framework. That gives community banks a cleaner way to sort an AI underwriting stack into three buckets: deterministic workflow software, model-driven components, and generative features that sit outside the bulletin entirely.
The guidance says it is expected to be most relevant to banking organizations above $30 billion in assets, but that does not mean smaller banks can ignore it. The same bulletin says it may still matter below that threshold when model risk exposure is significant because of model prevalence, complexity, or activity outside traditional community banking. If your bank relies on a vendor to spread tax returns, flag risk, estimate cash flow, or shape memo support, you still need an honest classification and control story.
This guide explains that shift for community-bank credit teams. For the broader rollout context, pair it with the AI-Assisted Underwriting Playbook, the deeper examiner readiness guide, and the operational view in AI underwriting use cases already in production.
Scannable summary
What changed, in plain English?
| Question | Current answer |
|---|---|
| What replaced SR 11-7? | The revised interagency guidance issued through SR 26-2 and OCC Bulletin 2026-13. |
| What counts as a model now? | A complex quantitative method that applies statistical, economic, or financial theories to produce quantitative estimates. |
| What is outside this guidance? | Simple arithmetic, deterministic rule-based software, and generative or agentic AI. |
| What still matters for community banks? | Correct classification, proportional validation, ongoing monitoring, and vendor oversight for the parts of the workflow that actually function as models. |
What replaced SR 11-7 for AI underwriting teams?
The current interagency guidance is the revised package issued on April 17, 2026. The Federal Reserve circulated it through SR 26-2. The OCC issued the same revision through Bulletin 2026-13 and, in the same bulletin, rescinded OCC Bulletin 2011-12 along with several earlier model-risk issuances. The replacement is the 2026 guidance itself, not a refreshed version of SR 11-7.
The core disciplines — validation, ongoing monitoring, governance, and vendor oversight — all carry forward. What changed is the framing. The guidance is more openly risk-based, more explicit about what it covers, and less willing to let every software workflow get described as a model by default.
That matters in commercial lending because AI underwriting stacks are messy. A vendor demo might bundle tax return spreading, exception flags, policy checks, memo drafting, and analyst chat in one product story. Under the revised guidance, those functions do not all create the same model-risk question.
What counts as a model versus deterministic software?
This is the most useful part of the revision for community banks evaluating AI underwriting tools. The bulletin defines a model as a complex quantitative method, system, or approach that applies statistical, economic, or financial theories to process input data into quantitative estimates. Then it explicitly excludes simple arithmetic calculations and deterministic rule-based processes or software.
Probably not a model under this guidance
- Fixed checklist logic for missing documents
- Spreadsheet-style ratio math using known inputs
- Deterministic policy thresholds that fire the same way every time
- Workflow routing that does not rely on statistical estimation
Likely still a model
- Probabilistic risk scoring
- Quantitative estimation of borrower performance or loss behavior
- Pattern-based model outputs that depend on trained statistical logic
- Vendor underwriting components producing estimates rather than fixed calculations
The relevant line runs through whether the feature is doing complex quantitative estimation or only deterministic workflow work, not whether the vendor calls it AI. A spreading tool that maps numbers into a template and rolls them into a fixed DSCR (debt service coverage ratio) formula is a different control problem from a feature that predicts cash-flow performance or assigns a probability-style risk score.
That distinction is why banks evaluating financial spreading software should make the vendor break the product into components. If the vendor cannot say which parts are deterministic and which parts are model-driven, the bank cannot govern the workflow honestly.
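To make the classification line concrete, here is a minimal Python sketch. The DSCR rollup and policy threshold are deterministic: the same inputs always produce the same output, which is what the revised guidance treats as outside the model definition. All function names and numbers are illustrative assumptions, not any vendor's API.

```python
# Deterministic ratio math: a fixed formula with no statistical estimation.
def dscr(net_operating_income: float, annual_debt_service: float) -> float:
    """Debt service coverage ratio from known inputs."""
    if annual_debt_service <= 0:
        raise ValueError("annual debt service must be positive")
    return net_operating_income / annual_debt_service

# Deterministic policy threshold: fires the same way every time.
def passes_policy(ratio: float, minimum: float = 1.25) -> bool:
    return ratio >= minimum

# By contrast, a hypothetical call like
#   vendor_model.predict_default_probability(borrower_file)
# would produce a quantitative estimate from trained statistical logic;
# that component is the one that belongs inside the model-risk framework.

ratio = dscr(250_000, 180_000)
print(round(ratio, 2), passes_policy(ratio))
```

The point of the sketch is the governance split, not the math: the two functions above belong in ordinary software change control, while the commented-out predictive call would need model-risk treatment.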
Why should community banks care if the guidance is aimed mostly above $30 billion?
Because the bulletin does two things at once. First, it says the guidance is expected to be most relevant to banking organizations with more than $30 billion in assets. Second, it says the same framework may still be relevant below that threshold when model risk exposure is significant because of model prevalence, model complexity, or activity outside traditional community banking.
That is not abstract. A community bank may not run a giant proprietary credit model, but it can still depend on a vendor stack that affects document interpretation, exception handling, ratio outputs, or memo support across a large share of commercial production. Once that happens, the bank still needs to know which functions are models, how they were validated for the bank’s actual files, and what ongoing monitoring will catch drift.
The proportionality piece still comes from OCC Bulletin 2025-26. That bulletin says community banks can tailor validation frequency and scope to risk, and OCC guidance should not be interpreted to require annual model validation. So the revised 2026 guidance tells you what belongs inside the model-risk frame. Bulletin 2025-26 still helps tell you how heavy the community-bank control posture should be.
What are the validation and monitoring expectations now?
The revised guidance still covers model development and use, validation and monitoring, and governance and controls, but it does not hand community banks a fixed annual cadence. The useful question is what model risk the workflow actually carries, and what evidence is enough to defend its intended use.
For an AI underwriting workflow, a reasonable community-bank practice is to validate against the bank's own files, keep an override trail, re-test after material vendor changes, and review whether the feature still behaves as expected on the tax returns, spreads, and exception cases the commercial team actually sees. The result is a risk-based control set sized to the workflow, rather than a cadence-driven exercise.
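One way to operationalize the parallel-run piece of that practice is a simple deviation check of tool outputs against analyst-verified values on the same recent files. The tolerance, file identifiers, and values below are illustrative assumptions, not a prescribed methodology.

```python
# Flag files where the tool's estimate deviates from the analyst-verified
# value by more than a relative tolerance. Exceptions feed the review queue.
def parallel_run(tool_values: dict, verified_values: dict,
                 tolerance: float = 0.02) -> list:
    exceptions = []
    for file_id, tool_v in tool_values.items():
        truth = verified_values[file_id]
        if truth == 0 or abs(tool_v - truth) / abs(truth) > tolerance:
            exceptions.append(file_id)
    return exceptions

tool = {"file-001": 1.41, "file-002": 0.98, "file-003": 1.20}
verified = {"file-001": 1.40, "file-002": 1.10, "file-003": 1.21}
print(parallel_run(tool, verified))  # ['file-002']
```

A check like this is cheap to rerun after a vendor release, which is what turns "re-test after material vendor changes" from a policy sentence into an artifact.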
Reasonable community-bank evidence pack
- A classified workflow map showing what is deterministic, what is model-driven, and what is generative
- Parallel-run results on recent commercial files
- Override logs showing what humans changed and why
- Release notes and re-test results after vendor model changes
- Named ownership for ongoing monitoring and escalation
For the operational version of that control stack — the workpapers and artifact checklist an exam team will actually want to see — read the examiner readiness guide.
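The first item in that evidence pack, the classified workflow map, can be sketched as a simple data structure. The feature names and category assignments below are illustrative assumptions, not a statement about any particular product.

```python
# Feature-by-feature classification of a hypothetical underwriting stack.
WORKFLOW_MAP = {
    "document_checklist": "deterministic",  # fixed rule logic
    "ratio_rollup":       "deterministic",  # fixed arithmetic on known inputs
    "risk_score":         "model",          # probabilistic estimate
    "cash_flow_forecast": "model",          # trained quantitative estimation
    "memo_draft":         "generative",     # out of scope of the bulletin
    "analyst_chat":       "generative",
}

def in_model_risk_scope(feature: str) -> bool:
    """Only model-driven components fall inside the model-risk framework."""
    return WORKFLOW_MAP.get(feature) == "model"

model_features = sorted(f for f, c in WORKFLOW_MAP.items() if c == "model")
print(model_features)  # ['cash_flow_forecast', 'risk_score']
```

Even a table this small forces the conversation the guidance asks for: every feature gets exactly one bucket, and each bucket maps to a different control regime.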
What does the guidance say about vendor and third-party products?
The revised guidance still discusses considerations related to vendor and other third-party products. Banks often want to short-circuit this section because the vendor already has a deck, a validation memo, and a compliance packet. That material can help the bank understand the product, but it does not transfer accountability for how the workflow is classified, approved, monitored, and used in production.
Community-bank vendor diligence should get more specific after the 2026 revision, not less. A useful diligence conversation maps the product by function — which features are deterministic software, which are model outputs, and which use generative help — and then walks through:

- How each component changes over time
- What validation evidence exists for the model pieces
- What happens when an underwriter overrides the output
- Whether the original machine value is preserved in the record
The bank still owns the final answer on use limits, human review, and whether the workflow is safe enough for live commercial underwriting. That is true whether the vendor sells a focused tool or a broader platform. A polished demo does not answer those governance questions.
Where do AI underwriting tools fit, exactly?
Most commercial AI underwriting tools sit across more than one bucket. Document collection and checklist logic may be deterministic. Spreading and ratio rollups may also be deterministic if they only apply fixed mapping and arithmetic. Risk scoring, prediction, or quantitative exception estimation may still be models. Memo drafting, analyst chat, and borrower-document Q&A may use generative features that the revised guidance explicitly leaves out of scope.
That is why buyers should stop asking “is this an AI underwriting model” as if the whole product has one answer. Better questions are: which feature shapes the credit recommendation, which feature produces a quantitative estimate, which feature only applies fixed rules, and which feature is generative assistance that needs separate boundaries.
The operational side of those workflows lives in AI underwriting use cases; this guide handles the control map that wraps around them.
What about generative and agentic AI if the guidance says they are out of scope?
This is the line banks are most likely to misread. The revised guidance says generative AI and agentic AI are not within its scope. That does not mean those features are exempt from governance. It means this bulletin is not the document that settles those control questions.
If a product drafts memo language, summarizes a borrower file in chat, or answers analyst questions about a tax return, the bank still needs rules on allowed use, source grounding, data handling, human review, and whether the feature can influence a credit package without an underwriter rewriting or approving the output. The bulletin gives you a governance frame for the adjacent model-driven pieces. It does not give memo chat a free pass.
Practical takeaway: even though the revised model-risk bulletin leaves generative features out of scope, banks usually still need separate internal rules for allowable use, borrower-data handling, and human review when those features touch loan files or committee materials.
Checklist
Six diligence questions for community-bank credit teams
- What part of this workflow is actually a model? Get a feature-by-feature map, not a product-level slogan.
- What part is deterministic software? Separate fixed checklist, routing, and arithmetic logic from anything probabilistic or estimate-driven.
- What evidence shows the model pieces work on our files? Ask for bank-side parallel validation on real commercial packages.
- What happens when the tool is wrong? The workflow should preserve the original output, the human correction, the reason, the user, and the timestamp.
- How do vendor changes get re-tested before live use? If no one can answer that cleanly, the bank does not control the workflow.
- Which generative features need separate limits? Memo drafting, analyst chat, and document Q&A should have explicit boundaries even though they are out of scope of this particular bulletin.
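The override-trail question in that checklist can be made concrete with a small record structure that preserves both values. Field names are illustrative assumptions, not a required schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# One override event: the machine value is never overwritten, so the record
# shows what the tool said, what the human changed, and why.
@dataclass(frozen=True)
class OverrideRecord:
    field: str            # e.g. "global_cash_flow"
    machine_value: float  # original tool output, preserved as-is
    human_value: float    # the underwriter's correction
    reason: str
    user: str
    timestamp: str

def record_override(field, machine_value, human_value, reason, user):
    return OverrideRecord(
        field=field,
        machine_value=machine_value,
        human_value=human_value,
        reason=reason,
        user=user,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

rec = record_override("global_cash_flow", 412_000.0, 395_000.0,
                      "excluded one-time asset sale", "jsmith")
print(rec.machine_value, rec.human_value)
```

Because the record is immutable and keeps the original output alongside the correction, the override log doubles as monitoring data: a rising override rate on one field is an early drift signal.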
The takeaway for community banks
The revised guidance is not a push for big-bank bureaucracy in every community-bank workflow. It asks for cleaner thinking: break the AI stack apart by function instead of treating it as one undifferentiated thing, govern each component on its own terms, and keep the underwriter's control visible in the record.
If your bank does that well, the 2026 change is actually helpful. It gives you a better vocabulary for vendor diligence, a tighter frame for proportional validation, and a more defensible way to explain why some features belong inside the model-risk program while others belong in software controls or generative-AI policy.
That is the posture Aloan is built around: source-traceable workflows, visible override control, and a clean line between analyst assistance and credit authority. If you want to pressure-test that against your own process, start with the demo or go one level deeper into the playbook.
Frequently asked questions
Is SR 11-7 still the current model risk guidance for bank AI underwriting tools?
No. The revised interagency guidance issued through SR 26-2 and OCC Bulletin 2026-13 superseded SR 11-7 and rescinded OCC Bulletin 2011-12 on April 17, 2026.
Does every AI underwriting feature count as a model under the revised guidance?
No. The revised guidance narrows the definition to complex quantitative methods and excludes simple arithmetic calculations and deterministic rule-based software.
Do community banks still need annual validation for every AI underwriting tool?
No. OCC Bulletin 2025-26 says community banks can tailor validation frequency and scope to risk, and OCC guidance should not be read to require annual model validation.
Are generative AI and agentic AI covered by the revised guidance?
No. The bulletin says they are out of scope. That does not remove governance obligations. It means banks need separate usage rules, data controls, and human review for those features.
Go deeper: Read the examiner readiness guide, the workflow map in AI underwriting use cases, the category-specific financial spreading software page, and the shorter explainer on OCC Bulletin 2026-13.