Most banks should buy. A few should build. The framework below walks through how to tell which one you are, with cost ranges that reflect what real builds typically run rather than what an internal pitch deck shows. The figures are illustrative — your numbers depend on scope, talent market, and document volume — but the order of magnitude is consistent across the banks we have seen attempt this.
If you are still scoping where AI fits in underwriting at all, start with the companion guide on AI underwriting use cases already in production. If you have decided on AI and are now scoping governance, the examiner readiness guide covers the current model risk framework. This guide assumes the AI question is settled and the open one is how to deliver it.
Decision framework
Score 1 to 5 on each question
| Question | 1 (Buy) | 5 (Build) |
|---|---|---|
| Bank asset size | Under $10B | Top 10 in the country |
| ML engineers on staff | None | 20+ |
| Time to first underwritten deal you can wait for | Weeks | 24+ months |
| Annual budget you can dedicate | Under $500K | $5M+ ongoing |
| SR 11-7 documentation tolerance | Low | Dedicated model risk team |
| Uniqueness of underwriting workflow | Standard commercial / SBA | Truly unique product |
| Strategic intent on technology | Focus on customer relationships | Want to be a fintech |
| Engineering opportunity cost | High (other priorities pressing) | Low (excess capacity) |
Scoring rule: 16 or under, buy. 17 to 28, buy and revisit in three years. 29 or higher, a build can be defensible — typically only at the largest, most engineering-heavy institutions.
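For teams that want to run the scorecard programmatically (or embed it in a planning spreadsheet export), the rule above reduces to a few lines. This is a minimal sketch; the question keys are shorthand for the table rows and not part of any standard.

```python
# Illustrative sketch of the scoring rule above. Keys are shorthand for the
# eight table rows; each answer is a score from 1 (buy) to 5 (build).

def build_vs_buy(scores: dict[str, int]) -> str:
    """Sum eight 1-5 answers and bucket the total per the scoring rule."""
    if len(scores) != 8:
        raise ValueError("expected one score per question (8 total)")
    if any(not 1 <= s <= 5 for s in scores.values()):
        raise ValueError("each score must be between 1 and 5")
    total = sum(scores.values())
    if total <= 16:
        return "buy"
    if total <= 28:
        return "buy, revisit in three years"
    return "build may be defensible"

# Example: a $5B community bank with no ML staff scores low across the board.
answers = {
    "asset_size": 1, "ml_engineers": 1, "time_tolerance": 2, "budget": 2,
    "mrm_tolerance": 2, "workflow_uniqueness": 1, "strategic_intent": 1,
    "opportunity_cost": 1,
}
print(build_vs_buy(answers))  # total is 11 -> "buy"
```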
Section 01
What "building" actually means in 2026
The picture most banks have of building AI underwriting is wrong. It is not "hire two engineers and ship a model in six months." It is a multi-year program with five distinct workstreams that all have to succeed at once.
1. Document ingestion
A commercial credit packet is not a clean dataset. It is a borrower emailing 14 PDFs over three weeks, where some are scanned at 150 DPI, some are photos of statements taken on a phone, some are protected Excel files, and at least one is the wrong borrower's tax return. Building a system that handles this reliably requires OCR, layout analysis, document classification, and a pipeline that can flag and route exceptions. Internal estimates routinely understate the engineering required to move from a clean demo to the long tail of real-world inputs.
2. Extraction and spreading
Once documents are classified, you need to pull line items out of them. Form 1040 has a specific shape. Form 1065 with Schedule K-1 has a different shape and needs to be linked to the 1040 of each partner. A 1120-S has yet another shape. Tax forms change every year. State and local forms vary. Building extraction that gets to 95% accuracy across the long tail takes a year of iteration with a dedicated team and labeled data you do not have.
3. Cash flow and credit analysis
Spreading numbers is the easy part. Reconciling them across entities, eliminating intercompany items, building global cash flow that an examiner will accept, and surfacing the analysis an underwriter uses to make a decision is harder. Most of that work is encoding credit policy in software, not building models. Every analyst has opinions about how to handle distributions, depreciation add-backs, and contingent guarantees, and codifying those opinions into consistent output is its own multi-month effort.
4. Examiner-ready output
Every number in a credit memo needs to trace back to a source document, a specific page, and a specific line. Examiners ask. Internal model risk teams ask. Model risk management standards (the post-April-2026 framework that replaced SR 11-7) require documentation, validation, ongoing monitoring, and governance for any model used in a credit decision. That work is engineering, not AI research, and it is roughly half the build.
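The traceability requirement can be made concrete with a small data structure: every figure in the memo carries a pointer back to the document, page, and form line it came from. A minimal sketch, with illustrative field names rather than any vendor's actual schema:

```python
# Minimal sketch of a source-traceable figure. Field names are illustrative,
# not a real vendor schema; the point is that value and provenance travel
# together so an examiner question resolves in one lookup.
from dataclasses import dataclass

@dataclass(frozen=True)
class SourcedFigure:
    label: str        # e.g. "Gross receipts"
    value: float
    document: str     # source file the number was extracted from
    page: int         # page within that document
    line_ref: str     # form line, e.g. "Form 1120-S, line 1a"

fig = SourcedFigure(
    label="Gross receipts",
    value=4_250_000.0,
    document="2025_1120S_acme.pdf",
    page=1,
    line_ref="Form 1120-S, line 1a",
)
# "Where did this number come from?" resolves directly from the record:
print(f"{fig.label}: ${fig.value:,.0f} <- {fig.document} p.{fig.page}, {fig.line_ref}")
```

The `frozen=True` flag makes the citation immutable once extracted, which is the property an audit trail needs.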
5. Maintenance
Maintenance is the workstream most build estimates skip. Tax forms change. New SBA SOPs come out. A core system update changes the format of the file you were parsing. A new foundation model lands and your existing prompts perform worse on it than the old one. Without three to five people dedicated to maintenance and continuous improvement, a system that worked in Year 1 degrades by Year 2.
The honest framing: building AI underwriting is an ongoing capability that has to be staffed and funded indefinitely, not a one-time project.
Section 02
The hidden costs that internal estimates usually skip
When a bank asks for a build cost estimate from an internal team or a consultancy, the headline number is usually missing the items that determine whether the project succeeds. The ranges below are illustrative orders of magnitude based on common build patterns; your actual numbers depend on talent market, document volume, and how much of the work you outsource. Treat them as a sanity check, not as a quote.
- Talent. Senior ML engineers with relevant experience are expensive on a fully loaded basis, and you need at least two of them, plus a backend engineer, a frontend engineer, and a product manager who understands lending. Add a credit subject-matter expert who can label training data and write rules. That is a multi-million-dollar annual run rate before shipping anything, and you are competing with the largest tech companies for the same people.
- Data labeling. AI models for document extraction need labeled training data. A useful commercial-lending corpus is several thousand hand-annotated documents reviewed by people who understand what they are looking at. Annotation is per-document and skilled labor; expect a six-figure outlay in Year 1 and recurring spend after that as document types change.
- Compute and infrastructure. Calling a foundation model on every document at scale costs money. Storing PII-laden documents securely, running them through a pipeline, and retaining them per record-keeping requirements costs more. Budget for a recurring infrastructure line that scales with deal volume.
- Model risk and compliance. Model validation for anything used in a credit decision requires independent review, documentation, and ongoing monitoring. The work is recurring and falls on internal headcount, external consultants, or both, plus the engineering time to produce the documentation packages they need.
- Security and audit. A system that ingests borrower PII, integrates with your core, and produces credit decisions will be reviewed by internal audit, external auditors, your cyber insurance carrier, and examiners. Each review costs internal hours and often surfaces remediation work.
- Opportunity cost. Every engineer working on the underwriting platform is not working on online banking, fraud detection, treasury management, or any other priority. For most banks, the opportunity cost is the largest hidden expense and the one that rarely appears on the spreadsheet.
A build project with a $1.5M headline estimate almost always carries a Year 1 spend in the low single-digit millions, an annual run rate in a similar range, and 18 to 24 months before the first credit memo gets generated. That is the realistic baseline before any comparison to a vendor.
Illustrative ranges · not vendor quotes
| Cost line | Year 1 (low) | Year 1 (high) | Annual run rate |
|---|---|---|---|
| Engineering team | $1.5M | $2.5M | $1.5M to $2.5M |
| Data labeling | $150K | $500K | $50K to $200K |
| Compute and infrastructure | $200K | $500K | $200K to $500K |
| Model risk and compliance | $100K | $300K | $100K to $300K |
| Security and audit | $100K | $250K | $100K to $250K |
| Total (excluding opportunity cost) | ~$2M | ~$4M | $2M to $4M |
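The table's totals can be sanity-checked with a quick roll-up. These are the article's illustrative order-of-magnitude figures, not a quote:

```python
# Illustrative Year 1 build-cost roll-up using the ranges in the table above.
# (low, high) pairs in dollars; the figures are this guide's rough estimates.
YEAR_1 = {
    "engineering":    (1_500_000, 2_500_000),
    "data_labeling":  (150_000, 500_000),
    "compute_infra":  (200_000, 500_000),
    "model_risk":     (100_000, 300_000),
    "security_audit": (100_000, 250_000),
}

low = sum(lo for lo, _ in YEAR_1.values())
high = sum(hi for _, hi in YEAR_1.values())
print(f"Year 1 total: ${low/1e6:.2f}M to ${high/1e6:.2f}M")  # $2.05M to $4.05M
```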
Section 03
What buying actually means in 2026
Buying AI underwriting from a vendor in 2026 is different from buying enterprise software in 2015. The good vendors deploy in days, not months. They do not require LOS migration. They work alongside your existing systems and start producing output on real deals immediately.
- Time to first value. Days to weeks, not months to years. Upload a real deal, see the credit memo come back, hand it to an underwriter, see what they think. The decision to expand or kill becomes a fact-based one within a single quarter, not a multi-year bet.
- Pre-trained models. A vendor that has processed tens of thousands of commercial credit packets has already solved the document classification, extraction, and spreading problems for the long tail of edge cases that would take you 18 months to encounter. You inherit that work on day one.
- Examiner-ready output included. The good vendors learned from their first ten customers what examiners want to see, baked it into the output, and iterate continuously as exam practices evolve. You are not figuring out audit trail design from scratch.
- Integration without migration. Work with the LOS, core, and document repository you already have. No replacement project, no re-training, no change management for your lenders.
- The vendor handles maintenance. Tax form updates, new SBA SOPs, foundation model improvements, security patches. None of that lands on your engineering team's roadmap.
- Pricing scales with use. Volume-based or deal-based pricing means cost grows when revenue grows, and stays low when activity is light. There is no $2M annual run rate sitting on the budget regardless of how many loans got underwritten.
- You can stop. If a vendor is not working out, you stop paying and switch. If an internal build is not working out, the options are scrap and start over, scrap and buy, or live with a half-finished system — all of which carry sunk cost and political baggage.
Section 04
When building actually makes sense
There are real cases where a build is the right call. Be honest about whether you are in one of them.
You are a money-center or top-tier regional bank with deep ML capacity
Dozens of ML engineers, proprietary data at scale, and a multi-year horizon. The largest banks already build, and they do so because they want to differentiate on credit and have the capacity to absorb the cost of being wrong.
You have a workflow no vendor handles, and that workflow is core to your strategy
You underwrite a specialized credit product where standard tools genuinely do not apply. Even then, the right move is usually to build only the parts that are unique to you and buy the rest. The bulk of an AI underwriting stack — document ingestion, spreading, cash flow — is not where differentiation lives.
You have a regulator-approved internal model risk framework that requires in-house ownership
A small number of institutions sit in this category and they know who they are. For most banks, vendor-supplied models with proper validation packages meet model risk expectations just as well as internal builds.
You can absorb a multi-year, multi-million-dollar investment before first value
Plus a comparable annual run rate after launch. If the CFO has signed off on that math with eyes open and the strategic case justifies it, building is defensible.
Outside of those four cases, the build math rarely holds up under honest review.
Section 05
When buying wins (most banks)
The case for buying is not that vendors are inherently better than internal teams. It is that the math of building does not work for most institutions, and the practical alternative is to use a vendor that has already solved the underlying problem.
- You want value in months, not years. Borrowers and competitors are not waiting two years for an underwriting team to get faster.
- You do not have ML talent in-house. Most community and regional banks do not, and the engineers willing to work on commercial lending specifically are rare. Hiring against tech-company compensation in this segment is a multi-year battle. A vendor's whole business is having those people.
- You need examiner-ready output and do not want to invent it. A vendor that has been through dozens of FDIC and OCC exams with its customers has learned what examiners want, where they push back, and which documentation patterns hold up.
- You want internal engineering working on what is unique to your bank. Online banking, treasury services, fraud detection, customer-facing tools — the things competitors cannot buy. Underwriting infrastructure is not on that list for most banks.
- You want optionality. A vendor relationship can be expanded, contracted, or replaced. A built system is a permanent line on the operating budget.
- You want cost to track value. Volume-based pricing means underwriting cost moves with underwriting activity. Built systems carry the same fixed cost at 100 deals as at 1,000.
Run the framework against an honest scorecard and the answer for most banks is to buy.
Section 06
What to look for in an AI underwriting vendor
Once you have decided to buy, the vendor evaluation matters. Most lending automation tools call themselves "AI" but mean very different things by it. Useful filters:
- Does it actually read documents and produce analysis, or does it manage workflow around manual data entry? A tool that routes a document to a human who types numbers into a template is workflow software, not AI underwriting. The right tool reads the document, extracts the data, and produces the spread.
- Does every number in the output trace back to a specific source page and line? This is the examiner test. If the answer is "no" or "kind of," the tool will fail in your first exam.
- How long is deployment from signed contract to first real deal in production? If the answer is measured in months, the tool is closer to a platform replacement than an add-on deployment. The right answer is days to weeks.
- Does it work with your existing LOS and core, or does it require migration? Migration is risk. Tools that work with what you have minimize the risk and let you compare honestly to your current process.
- How is it priced? Volume-based or deal-based pricing aligns the vendor's cost with your activity. Per-seat or fixed enterprise pricing rewards the vendor regardless of whether you actually used the product.
- What is the maintenance commitment? Does the vendor update for new tax forms, SBA SOP changes, and regulatory shifts as part of the subscription? Or is it your problem to detect when the system breaks?
A vendor that scores well on those filters is doing the work you would otherwise have to staff internally for years. For a deeper look at the segment, see the comparison of best AI underwriting platforms for community banks and the breakdown of commercial lending software that competes for these workflows.
Section 07
Common objections to buying, and honest answers
"Our data is unique. A vendor's models will not perform on it."
Probably untrue. Commercial credit documents look broadly the same across banks. K-1 distributions trace the same way. Tax returns have the same shape. The underwriting policy on top is where banks differ, and the right vendor lets you encode that policy without retraining the model.
"Vendor lock-in is a risk."
Real, but smaller than build lock-in. If you are unhappy with a vendor, you can switch in 60 to 90 days. If you are unhappy with an internal build, the only options are scrap and rebuild, scrap and buy, or limp along with what you have. Vendor lock-in is a contract term. Build lock-in is sunk cost and political capital.
"What if the vendor goes out of business?"
Fair. Mitigations: source code escrow for critical pieces, a contractual exit clause that gives you 90 days of continued service if the vendor is acquired or shuts down, and the ability to export your data and audit trail in standard formats. The good vendors offer these as standard.
"We need control over the credit decision logic."
You should. The right vendor lets you configure thresholds, scoring rules, and policy without engineering effort. You configure your credit policy. The vendor provides the underlying reading and spreading. Both are needed and they are not the same problem.
"We are worried about borrower data security."
A SOC 2 Type 2 audited vendor with encryption at rest and in transit, role-based access, and a published incident response process is almost certainly more secure than what your internal team would build with the same resources. Ask for the audit reports. Read them.
Section 08
How the comparison usually plays out
The board-meeting version of build vs. buy is a slide with two columns. The honest version is a CFO running expected loan volume against engineering run rate, factoring in the lag before either option produces a single underwritten deal. Vendor quotes for AI underwriting platforms at the community and regional bank scale typically come in well under a single year of an internal team's run rate, with deployment measured in weeks. Internal build proposals come back with multi-quarter timelines and run rates that compound past Year 1.
The build pitch tends to sound better in the room because it uses words like "proprietary" and "differentiated." The pitch loses ground once the cost of waiting for value is priced in alongside the talent and maintenance commitments described above.
The takeaway is not that vendors always win. It is that the framework above, applied honestly, points the same direction for the large majority of community and regional banks.
How this works in practice: Aloan is the buy option built around the criteria above. Document ingestion, financial spreading, global cash flow, and credit memo drafting all happen automatically with source-page citations on every number. It deploys in days alongside existing systems, ships model risk validation packages aligned with the current framework out of the box, and prices on volume so cost tracks activity. Pressure-test it on your own deals via the SBA underwriting workflow, the CRE loan analysis workflow, or automated credit memo generation.
FAQ
Frequently asked questions
Should community banks build or buy AI underwriting?
For nearly all community banks, buy. Build cost runs into the millions in Year 1 with comparable annual run rates, requires ML talent that is hard to hire and harder to retain, and takes 18 to 24 months before producing the first credit memo. The same outcome is available from a vendor in days to weeks at a fraction of the cost. The exception is a community bank with a truly unique underwriting workflow that no vendor handles, and even then, building only the unique pieces and buying the rest usually wins.
What does it actually cost to build an AI underwriting platform in-house?
Year 1 typically runs into the low millions when you include ML engineers, data labeling, infrastructure, model risk and compliance work, and audit costs. Annual run rate after Year 1 stays in the low millions for ongoing engineering, maintenance, and validation. Exact numbers depend on scope, talent market, and document volume; the figures in this guide are illustrative ranges, not vendor quotes. These numbers also assume the project succeeds — many internal AI builds at banks do not.
How long does it take to deploy AI underwriting from a vendor?
The right vendor can have you processing real deals within days to weeks of contract signature. The deployment is not an LOS migration. It is account setup, document upload configuration, and connection to your existing systems. Compare to 18 to 24 months for an internal build to reach first production use.
What are the regulatory implications of buying AI underwriting versus building?
For US banks, model risk management standards apply to any model used in credit decisions, regardless of whether it was built or bought. The post-April-2026 framework that replaced SR 11-7 sets the validation, documentation, and monitoring expectations. The good vendors provide validation packages, documentation, and ongoing monitoring reports designed against those expectations. Building does not exempt the bank from the same requirements; it shifts the work to internal staff. See the examiner readiness guide for what that program needs to contain.
What if our bank has unique credit policy that a vendor cannot accommodate?
The right tools separate underwriting infrastructure (document processing, spreading, cash flow) from credit policy (thresholds, scoring rules, decisioning logic). The infrastructure is broadly the same across banks. Credit policy is what differentiates you and should be configurable in the vendor tool, not hard-coded. If a vendor cannot accommodate your policy, that is a vendor selection problem, not a build-vs-buy problem.
Can we use a vendor for some workflows and build for others?
Yes, and for some banks this is the right answer. Buy for the parts that are common across institutions (document ingestion, financial spreading, examiner-ready output) and build only the pieces that are genuinely unique to your bank. This minimizes the build scope to the parts where building actually creates value, and avoids paying the build cost for the 80% that vendors have already solved.
What happens to our internal data if we use a vendor?
A reputable vendor stores your data in a SOC 2 audited environment with encryption, access controls, and a clear data ownership policy. Your data remains your data. The vendor processes it on your behalf, retains it per your record retention policy, and provides export capabilities so you can move it if you change vendors. Verify these terms in the contract before signing.
How do we evaluate AI underwriting vendors?
Six criteria matter most. First, does the tool actually read documents and generate analysis, or does it just manage workflow. Second, does every number in the output trace to a source document. Third, can it deploy in days to weeks, not months. Fourth, does it work with existing systems rather than requiring migration. Fifth, is pricing volume-based to align with activity. Sixth, does the vendor own ongoing maintenance for tax forms, SOPs, and regulatory changes. A vendor that scores well on these is doing the work you would otherwise staff internally for the next three years.
What is the risk of vendor lock-in?
Smaller than the risk of build lock-in. If a vendor relationship does not work out, switching takes 60 to 90 days. If an internal build does not work out, the alternatives are scrap and rebuild, scrap and buy, or live with a half-finished system indefinitely. Mitigate vendor lock-in with contractual data export rights, source-code escrow for critical components, and a 90-day continuation clause for acquisition or shutdown scenarios. These are standard asks.
Is AI underwriting examiner-friendly?
When done right, yes, and often better than manual underwriting. Examiners want to see how every number was derived, what source documents were used, and what assumptions were made. AI tools that include source-page citations on every calculated number produce more consistent and more thoroughly documented credit files than manual processes typically do. The current interagency model risk framework (SR 26-2 and OCC Bulletin 2026-13) sets out the validation and monitoring expectations a bank needs to satisfy when AI is used in a credit decision.
Go deeper: the AI-Assisted Underwriting Playbook ties this build-vs-buy framing into a full implementation sequence. For workflow-by-workflow breakdowns of where AI is already in production, see the use cases guide. For the regulatory side of the same decision, see examiner readiness for AI lending. Browse the full guides hub for the complete reading path.