*April 2026 | Risk & Regulatory Affairs*
---
The Federal Reserve, OCC, and FDIC just issued proposed guidance that is set to modernize the foundational framework governing how banks manage model risk. SR 26-2, if finalized, would represent the most significant update to the Model Risk Management (MRM) framework since SR 11-7 was published in 2011 — and for good reason. The financial industry looks very different today than it did fifteen years ago: AI and machine learning have moved from novelty to mainstream, third-party model vendors are embedded in nearly every institution's technology stack, and the sheer volume and complexity of models in use have grown exponentially.
This post breaks down what is changing, what is staying the same, and — most importantly — what it means for your institution depending on where you fall on the asset size spectrum.
---
## What SR 11-7 Got Right (And Why It Needed an Update)
SR 11-7 established the three pillars of model risk management that the industry has operated under for over a decade: conceptual soundness, ongoing monitoring, and outcomes analysis. It introduced the discipline of model validation as a formal function, distinct from model development, and set expectations for governance, documentation, and independent challenge.
Those principles have held up. SR 26-2 does not replace them — it builds on them. What the 2011 guidance could not anticipate was the emergence of black-box AI models, the explosion of vendor-built and third-party models, and the need for near-real-time monitoring in an environment where model inputs and market conditions can shift rapidly.
SR 26-2 addresses all of that.
---
## Key Changes Under SR 26-2
### 1. Model Validation Gets Sharper Teeth
Under SR 11-7, validation was periodic and broadly risk-based. SR 26-2 sharpens expectations considerably, tying the intensity and nature of validation directly to model complexity, materiality, and risk tier.
For traditional statistical and quantitative models, the core validation framework remains familiar. But for AI and machine learning models, the guidance calls for specialized techniques that address risks SR 11-7 simply was not designed to handle — explainability (can you explain why the model produced a given output?), fairness and bias testing, and data drift (the phenomenon where the statistical properties of input data shift over time, degrading model performance without any change to the model itself).
The practical implication: institutions relying on AI/ML models for credit decisions, fraud detection, or stress testing will need to invest meaningfully in validation tooling and expertise that go well beyond back-testing and benchmarking.
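To make the data-drift concept concrete, here is a minimal sketch of the Population Stability Index (PSI), a metric many validation teams already use to compare a model input (or score) distribution at development time against the current population. The binning scheme and the 0.10/0.25 rules of thumb below are common industry conventions, not anything prescribed by SR 26-2:

```python
import numpy as np

def population_stability_index(expected, actual, n_bins=10):
    """Population Stability Index (PSI), a common drift metric.

    Rules of thumb used in many MRM shops (a convention, not
    regulatory language): PSI < 0.10 stable, 0.10-0.25 watch,
    > 0.25 materially drifted.
    """
    # Bin edges come from the development-time (expected) distribution
    edges = np.percentile(expected, np.linspace(0, 100, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range values

    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)

    # Floor empty bins to avoid log(0)
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))
```

On two samples drawn from the same distribution the PSI stays near zero; a one-standard-deviation shift in the mean of the input pushes it well past the 0.25 line, which is exactly the kind of silent degradation the guidance is concerned with.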
### 2. Monitoring Becomes a Continuous Function
One of the most operationally significant shifts in SR 26-2 is the elevation of model monitoring from a periodic review activity to a near-continuous, dynamic process. The guidance calls for automated monitoring of model performance, data inputs, and outputs, supported by defined thresholds that trigger specific actions when breached — review, recalibration, or re-validation.
This is a meaningful departure from how most institutions currently operate. In practice, many banks run monitoring reports quarterly or annually, with ad hoc review triggered by significant market events or model failures. SR 26-2 envisions something closer to an early warning system — one that surfaces performance degradation and concept drift before it becomes a material risk event.
For MRM teams, this means building (or buying) monitoring infrastructure capable of flagging issues in near-real-time, not just producing periodic reports.
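One way to picture "defined thresholds that trigger specific actions when breached" is a small escalation map from a monitoring metric to an action. The metric name, the threshold values, and the three-tier action ladder below are illustrative assumptions, not figures from the guidance:

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    NONE = "no action"
    REVIEW = "targeted review"
    RECALIBRATE = "recalibration"
    REVALIDATE = "full re-validation"

@dataclass(frozen=True)
class Threshold:
    """Escalation bands for a 'higher is worse' metric (e.g. drift)."""
    metric: str
    warn: float      # breach -> targeted review
    alert: float     # breach -> recalibration
    critical: float  # breach -> full re-validation

def evaluate(t: Threshold, value: float) -> Action:
    # Check the most severe band first so each breach maps to one action
    if value >= t.critical:
        return Action.REVALIDATE
    if value >= t.alert:
        return Action.RECALIBRATE
    if value >= t.warn:
        return Action.REVIEW
    return Action.NONE

# Illustrative limits for an input-drift metric; not regulatory numbers
psi_limits = Threshold(metric="psi", warn=0.10, alert=0.25, critical=0.40)
```

The point of the sketch is that the thresholds and the resulting actions are defined in advance and evaluated automatically, rather than debated after the fact in a quarterly review.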
### 3. Vendor and Third-Party Models Lose Their Special Treatment
This may be the most operationally disruptive change for mid-sized and larger institutions. Under the current framework, many organizations apply a lighter validation standard to vendor models, citing limited access to underlying code, methodology, or development data. SR 26-2 closes that gap. Vendor and third-party models are expected to receive the same level of validation rigor as internally developed models.
That means developing a genuine understanding of the model's conceptual soundness, conducting ongoing monitoring and outcomes analysis, and — where customizations have been made — documenting, justifying, and evaluating those adjustments as part of the formal validation process.
For institutions that rely heavily on vendor-supplied models for activities like CECL reserving, credit scoring, or fraud detection, this will require significant negotiation with vendors around data access and transparency, as well as investment in the internal capacity to challenge and validate models where full transparency is unavailable.
### 4. CCAR Stress Testing Faces Holistic Scrutiny
CCAR models have always been among the most scrutinized in any large bank's portfolio. SR 26-2 raises the bar further by extending expectations beyond individual model validation to the governance of the entire stress-testing process — including scenario design, the quality and appropriateness of expert judgment embedded in macroeconomic assumptions, and the aggregation of results across the portfolio.
Dynamic validation is a new expectation: CCAR models must be validated not just for general performance, but for their stability and sensitivity under the specific severe scenarios used in each annual test. Banks running CCAR programs should expect more examiner scrutiny on the front end of the process — how scenarios are designed and challenged — not just on model outputs.
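A crude way to probe stability and sensitivity under a specific severe scenario is to shock each scenario variable and measure how much the model output moves. Everything in this sketch — the callable model, the variable names, the toy loss function, the 5% shock size — is a hypothetical illustration, not a prescribed methodology:

```python
def scenario_sensitivity(model, scenario, shock=0.05):
    """Shock each scenario variable by +/- `shock` (relative) and report
    the largest resulting move in the model output as a fraction of the
    baseline output. Assumes a nonzero baseline; illustrative only.
    """
    base = model(scenario)
    spreads = {}
    for name, value in scenario.items():
        up = model({**scenario, name: value * (1 + shock)})
        down = model({**scenario, name: value * (1 - shock)})
        spreads[name] = max(abs(up - base), abs(down - base)) / abs(base)
    return spreads

# Hypothetical toy loss model: losses rise with unemployment and fall
# with house-price growth. Real CCAR models are vastly more complex.
def toy_loss_model(s):
    return 100.0 * s["unemployment"] - 20.0 * s["hpi_growth"]

severe_scenario = {"unemployment": 10.0, "hpi_growth": -5.0}
```

A fragile model shows outsized spreads on one or two inputs under the severe scenario even when its average performance looks fine — which is the gap between "general performance" validation and the dynamic validation SR 26-2 describes.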
### 5. Governance and Accountability Move Up the Org Chart
SR 26-2 formalizes governance accountability in ways that SR 11-7 left somewhat vague. Boards and senior management are explicitly expected to own model risk management, not simply receive reports from it. The guidance calls for clearer delineation of roles and responsibilities across the model lifecycle, with particular attention to potential conflicts of interest between model development and validation functions.
For large institutions with complex organizational structures, this may require revisiting how MRM reports to the board, how model risk appetite is articulated and monitored, and whether the current governance framework is genuinely integrated or merely ceremonial.
### 6. A Notable Carve-Out: Generative AI and Agentic AI
SR 26-2 explicitly excludes generative AI and agentic AI models from its scope, citing their novelty and rapid evolution. This is a meaningful acknowledgment by regulators that the risk profile of large language models and autonomous AI agents is fundamentally different from traditional statistical models — and that the industry and regulators alike need more time to develop appropriate frameworks.
That said, the guidance is clear that institutions are still expected to apply sound risk management and governance principles to any tools, processes, or systems outside the scope of SR 26-2. The carve-out is not a free pass; it is a deferral.
---
## What This Means by Institution Size
Not all institutions face the same level of exposure to SR 26-2. The guidance itself is explicit about scope — and size is the primary determinant.
### Large Banks and Complex Credit Unions ($30B+ in assets)
**Fully in scope. Immediate action required.**
Institutions at this tier — think Bank of America, JPMorgan, M&T Bank, Navy Federal Credit Union — face the full weight of SR 26-2's expectations. For MRM teams at these organizations, the guidance signals:
- A likely need to overhaul AI/ML validation frameworks, particularly around explainability and bias testing
- Investment in automated monitoring infrastructure capable of near-real-time performance tracking
- A significant effort to bring vendor model validation up to the standard now applied to in-house models
- Increased board-level engagement on model risk appetite and governance
The headcount and technology implications are real. Institutions that have been running lean MRM functions should expect that SR 26-2's finalization will drive demand for specialized validation talent and tooling across the portfolio.
### Mid-Tier Banks and Credit Unions ($10B–$30B in assets)
**Conditional scope. Gap assessment recommended now.**
The guidance is formally targeted at institutions above $30 billion, but mid-tier institutions should not interpret that threshold as a clean exemption. The language of SR 26-2 explicitly preserves examiners' ability to apply its principles to smaller institutions that carry significant model risk due to portfolio complexity or non-traditional activities.
For institutions in this range, the most pressing concerns are:
- **Vendor model reliance**: Mid-tier institutions tend to rely heavily on vendor models for credit, CECL, and fraud functions. The heightened third-party validation expectations in SR 26-2 are directly relevant, even for institutions below the threshold.
- **Monitoring gaps**: Many institutions in this tier lack the automated monitoring infrastructure that SR 26-2 envisions. Examiners are likely to apply the spirit of the guidance during safety and soundness exams, regardless of formal scope.
- **Proactive gap assessment**: The time to evaluate your current MRM framework against SR 26-2 expectations is now — before finalization, while there is still time to comment on the proposal and plan remediation without being in reactive mode.
### Community Banks and Small Credit Unions (Under $10B in assets)
**Generally exempt. Monitor for finalization.**
Community banks and small credit unions are formally excluded from SR 26-2 under the agencies' stated commitment to a tailored supervisory approach. The guidance acknowledges that models used by institutions in this tier are typically subject to internal governance practices proportionate to their size and risk profile — and that this is appropriate.
There is one meaningful exception: institutions below $10 billion that have expanded into non-traditional activities, rely on complex third-party models, or have model portfolios that look more like those of larger institutions may still attract examiner attention. For most community institutions, however, the practical impact of SR 26-2 is limited.
The prudent approach is to monitor the proposal through the comment period, note any final changes from the current draft, and evaluate whether your institution's model governance practices are adequately documented and defensible — not because SR 26-2 requires it, but because sound practice does.
---
## The Bottom Line
SR 26-2 does not reinvent the wheel. The foundational principles of SR 11-7 — rigorous validation, independent challenge, sound governance — remain intact. What changes is the level of specificity, the intensity of expectations for complex models, and the scope of what counts as adequate oversight in a world where AI, third-party models, and continuous data monitoring have become standard features of the financial services landscape.
For MRM professionals at large institutions, the guidance is a call to action. For mid-tier institutions, it is a preview of where examiner expectations are heading. For community banks and credit unions, it is a signal of the direction of travel — even if the destination is not yet required.
The comment period is the moment to engage. The time to build your readiness roadmap is now.
---
*The information in this article is based on SR 26-2 as currently proposed. The guidance has not been finalized and is subject to change. This post does not constitute legal or regulatory advice. Consult your legal counsel and compliance team for institution-specific guidance.*
*SR 26-2: The Biggest Shake-Up to Model Risk Management Since 2011 (published 4/22/2026)*