Here is something that is almost certainly happening at your institution right now: an analyst gets a 1065 with a complex K-1 structure. They are behind on the deal. They open ChatGPT, paste in pages from the tax return, and ask it to help trace the distributions.
Or they upload financial statements and ask it to flag inconsistencies. Or they feed it a rent roll and a set of operating statements to cross-check the expense ratios.
They are not being reckless. The tools are available, the pressure is real, and the alternative is another two hours of manual work on a deal that needed to be in front of credit committee yesterday.
The Analysts Doing This Are Your Best People
This is the part that matters. The people reaching for ChatGPT are not your weakest links. They are the resourceful ones. The ones who figure out how to get the work done when the volume exceeds the headcount.
Think about why they are doing it. A single commercial deal can generate 500 to 1,000 pages of tax documents across entities and years. Three years of returns per entity, multiple entities, multiple guarantors. That is one to two full days of manual spreading before anyone applies any credit judgment. Multiply that across a pipeline of 20 or 30 active deals and you start to understand the math. The use cases for AI in underwriting are not theoretical. They are being invented in real time by analysts under deadline pressure.
Hiring does not fix it. Good commercial underwriters already have jobs. Junior analysts take six to twelve months to ramp, and during that period they consume senior analyst time instead of freeing it.
So your senior people find shortcuts. That is not a character flaw. That is an operational signal.
The Governance Gap Is the Actual Problem
The issue is not that people are using AI. The issue is that borrower tax returns, personal financial statements, and guarantor information are flowing into consumer tools with no data governance, no audit trail, and no way to explain to an examiner what happened to that data.
If a regulator asked "what tools are your analysts using to support their underwriting work?" the honest answer at most institutions would be uncomfortable.
This is not a hypothetical concern. The OCC's third-party risk management guidance (OCC 2023-17) and the Federal Reserve's SR 11-7 on model risk management have brought AI governance into explicit focus for community banks. If you are using AI tools in any part of the lending process, examiners now expect documentation, governance, and oversight proportionate to the risk. "We did not know our analysts were using ChatGPT" is not an answer that holds up.
And the compliance exposure is avoidable. That is what makes this frustrating. The gap between "AI tools that analysts actually use" and "AI tools the institution has sanctioned and governed" is a solvable problem. Most institutions just have not gotten around to solving it yet.
What Governance Actually Looks Like Here
Governance does not mean banning AI. Banning it will not stop the behavior. It will just push it further underground. People who need to get deals done will find ways to get deals done.
Governance means giving your team tools that meet the same need ChatGPT is meeting, but inside a framework that an examiner can review. That comes down to three things:
- Data containment. Borrower data stays within systems you control. No PII flowing to consumer AI products. Your data security posture should be documented and auditable.
- Explainability. Every AI-generated number traces back to a source document and page. When an examiner pulls a file, every figure in the credit memo has a citation. Not "the AI said so" but "page 3 of the 2024 1065, line 22" (see the sketch after this list).
- Human-in-the-loop by design. AI handles the data extraction and spreading. Humans own the credit judgment. The system makes the division of labor explicit and auditable. No black boxes making credit decisions.
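To make the explainability and human-in-the-loop points concrete, here is a minimal sketch of what a citation-backed extraction record could look like. The class and field names (SourceCitation, ExtractedFigure, reviewed_by) and the borrower details are illustrative assumptions, not a description of any particular platform's schema. The point is simply that every value carries its source and its reviewer.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class SourceCitation:
    """Where an extracted number came from: document, page, and line item."""
    document: str      # e.g. "2024 Form 1065 - Smith Holdings LLC" (hypothetical)
    page: int
    line_item: str

@dataclass
class ExtractedFigure:
    """A single AI-extracted value plus the audit trail an examiner would ask for."""
    name: str
    value: float
    citation: SourceCitation           # every figure traces back to a source page
    extraction_confidence: float       # model confidence, surfaced rather than hidden
    reviewed_by: Optional[str] = None  # analyst sign-off; credit judgment stays human

    def is_approved(self) -> bool:
        # Human-in-the-loop by design: an unreviewed figure never flows
        # into a spread or a credit memo.
        return self.reviewed_by is not None

# Example: a figure the AI extracted, pending analyst review (values are made up)
noi = ExtractedFigure(
    name="ordinary_business_income",
    value=412_500.00,
    citation=SourceCitation(
        document="2024 Form 1065 - Smith Holdings LLC",
        page=3,
        line_item="Line 22 - Ordinary business income",
    ),
    extraction_confidence=0.97,
)
noi.reviewed_by = "analyst_jdoe"  # explicit sign-off recorded for the audit trail
assert noi.is_approved()
```

An examiner pulling the file gets the document, page, and line for every number, plus a record of who signed off on it.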
The Federal Reserve's SR 11-7 guidance on model risk management sets the expectation that any model used in credit decisions, AI included, is validated with independent review and ongoing monitoring.
The AI-Assisted Underwriting Playbook covers these requirements in detail, including the regulatory framework under SR 11-7 and OCC 2025-26, and a 30/60/90-day implementation timeline. We are also publishing a deeper governance-specific guide soon that walks through building a proportionate AI governance framework for community banks.
The Uncomfortable Middle Ground
Most institutions are stuck in an in-between state that is worse than either alternative. They have not formally adopted AI tools for underwriting. But they also have not effectively prevented their use. The result is ungoverned AI usage with no audit trail, no data controls, and no institutional awareness of what is actually happening.
Compare that to two cleaner positions:
| Position | Reality |
|---|---|
| No AI, enforced | You accept the manual workload and the turnaround times. Defensible, but your best analysts start looking at shops that give them better tools. |
| AI adopted with governance | You give analysts purpose-built tools inside a governed framework. Data stays contained, every output is traceable, and you have documentation ready for the next exam. |
The middle ground, where AI is being used but not governed, carries the risks of both positions and the advantages of neither.
What To Do About It
Start by acknowledging what is already happening. Have an honest conversation with your underwriting team. Not accusatory, just direct: what tools are you using, and where are the pain points driving you to them?
Then evaluate whether purpose-built alternatives can close the gap. The specific use cases where AI is already in production at lending institutions are well documented: tax return spreading and financial analysis, global cash flow assembly, document extraction, DSCR calculation, and credit memo drafting. These are the same tasks your analysts are trying to solve with ChatGPT, just without the data leakage and audit risk.
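One of those use cases is simple enough to show in full. A global DSCR is total cash flow available for debt service divided by total annual debt service across the borrower entities and guarantors. The sketch below is illustrative only: the function name, entity names, and figures are made up, and the 1.25x reference is a common policy minimum rather than a rule.

```python
# Minimal sketch: global debt service coverage ratio (DSCR) from
# entity-level cash flow and debt service figures. Illustrative only.

def global_dscr(cash_flows: dict[str, float], debt_service: dict[str, float]) -> float:
    """DSCR = total cash flow available for debt service / total annual debt service."""
    total_cash_flow = sum(cash_flows.values())
    total_debt_service = sum(debt_service.values())
    if total_debt_service == 0:
        raise ValueError("Total debt service cannot be zero")
    return total_cash_flow / total_debt_service

# Example: borrower entity plus guarantor, combined into a global ratio (made-up figures)
cash_flows = {
    "Smith Holdings LLC (2024 1065)": 412_500.00,  # entity operating cash flow
    "J. Smith (2024 1040)": 185_000.00,            # guarantor personal cash flow
}
debt_service = {
    "Smith Holdings LLC": 310_000.00,
    "J. Smith": 96_000.00,
}

dscr = global_dscr(cash_flows, debt_service)
print(f"Global DSCR: {dscr:.2f}x")  # 1.47x against a typical 1.25x policy minimum
```

The arithmetic is trivial. The value of governed tooling is that every input to it is traceable, which is what the manual spreading work and the audit trail are really about.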
The starting point matters too. Spreading is the highest-value place to begin because it is where the most time is spent, the most manual errors occur, and the governance requirements are the most straightforward. You can build your governance framework around a well-scoped use case and expand from there, rather than trying to boil the ocean with a full AI strategy on day one.
FAQ: AI governance in commercial lending
Is it safe for bank analysts to use ChatGPT on loan files?
Using consumer ChatGPT on loan files creates significant risks: borrower data may be stored on OpenAI's servers, there is no audit trail for examiners, outputs cannot be validated against source documents, and the institution has no governance over how the tool is used. OCC guidance and interagency expectations require banks to have documented AI governance frameworks before deploying AI in lending workflows.
What does the OCC say about AI in commercial lending?
OCC guidance and broader interagency expectations require institutions to implement model risk management (per SR 11-7), maintain explainability and audit trails, validate AI outputs, and document governance procedures. The OCC does not prohibit AI in underwriting but expects the same rigor applied to any model used in credit decisions.
How should banks govern AI use in underwriting?
Banks should establish a formal AI governance framework that includes: approved tools and use cases, data handling policies that prevent borrower information from leaving secured environments, audit trails linking every AI output to source documents, model validation procedures, and examiner-ready documentation. Enterprise AI platforms like Aloan provide these controls by default.
What is the difference between consumer AI and enterprise AI for lending?
Consumer AI tools like ChatGPT process data on shared infrastructure with no guarantee that your data will not be used for training, no audit trails, and no compliance controls. Enterprise AI platforms use isolated processing environments (e.g., Google Cloud Vertex AI, AWS Bedrock) where data is encrypted and not used for model training, and purpose-built lending platforms add the layer that makes every output traceable to source documents.
Going deeper? The AI-Assisted Underwriting Playbook covers governance frameworks, six production use cases, regulatory expectations under SR 11-7 and OCC 2025-26, and a 30/60/90-day implementation timeline for community banks.