3 Types of Risk & Impact Assessments, And When to Use Them

A common requirement in many AI standards, regulations, and best practices is to conduct risk and impact assessments to understand the potential ways AI could malfunction or be misused. By understanding the risks, organizations can prioritize and implement appropriate technical, organizational, and legal mitigation measures. While there are standards for these assessments, such as the newly released ISO 42005 standard for AI impact assessments, practitioners often see the guidance as a barrier, and conducting full assessments at scale for every AI project is not feasible, even with best-in-class automation software such as Trustible.

Many organizations therefore implement escalating levels of risk and impact assessments, where lower-risk use cases are quickly approved and higher-risk AI uses undergo additional review.

Trustible has gathered feedback and best practices from across the industry, and has observed a few common trigger points for these assessment levels. Specifically, we group assessments into three tiers:

  • Initial assessments
  • Intermediate assessments, and
  • Advanced assessments

Below we discuss the contexts in which each tier applies, the processes to consider at each stage, and when escalation to the next tier is appropriate.

Tier 1: Initial Assessment 

Every new AI use case should undergo an initial fact-finding analysis to capture its key details, including how, where, and why AI will be used. Tier 1 assessments can be thought of as a “t-shirt sizing” effort meant to capture a general sense of the inherent risk of a proposed AI use case. The goal of this approach is partly to decide what level of governance should be applied to the use case, and whether there’s a chance it falls into a specific regulated category, such as high-risk use cases under the EU AI Act, or presents a significant reputational or brand risk. To keep this practical, the outputs of these assessments are typically based on just a questionnaire and attestation.

Practitioners should think of the initial assessment stage as a combined risk and impact assessment process. You will need to capture enough information to understand the general risk categories for the use case, as well as the broader categories of impacted stakeholders. At this stage, you are assigning an initial risk level, which will determine whether escalating to the next tier of assessment is appropriate. Plenty of use cases fall into the low-risk category and have limited stakeholder impacts; identifying these use cases early helps streamline the overall assessment process.
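To make this concrete, here is a minimal sketch of what a Tier 1 intake record and its “t-shirt sizing” logic might look like. All field names, the questionnaire contents, and the scoring thresholds are illustrative assumptions, not a prescribed schema; a real program would weight questions according to its own risk appetite.

```python
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    VERY_HIGH = 4

@dataclass
class Tier1Intake:
    """Questionnaire answers captured for every proposed AI use case.

    Field names are illustrative; adapt them to your own questionnaire.
    """
    use_case: str                  # what the AI will do
    business_unit: str             # where it will be used
    purpose: str                   # why it is being adopted
    uses_personal_data: bool       # handles personal or sensitive data
    affects_individuals: bool      # outputs influence decisions about people
    externally_facing: bool        # exposed to customers or the public
    potential_eu_high_risk: bool   # may fall under an EU AI Act high-risk category

def initial_risk_level(intake: Tier1Intake) -> RiskLevel:
    """Coarse "t-shirt sizing" of inherent risk from questionnaire answers.

    The scoring rule is a deliberately simple illustration: count the
    risk-raising answers and bucket the total.
    """
    score = sum([
        intake.uses_personal_data,
        intake.affects_individuals,
        intake.externally_facing,
        2 * intake.potential_eu_high_risk,  # regulated categories weigh more
    ])
    if score >= 4:
        return RiskLevel.VERY_HIGH
    if score >= 3:
        return RiskLevel.HIGH
    if score >= 1:
        return RiskLevel.MEDIUM
    return RiskLevel.LOW
```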

When to Use It

All proposed AI use cases should undergo a Tier 1 assessment. An effective Tier 1 assessment process is relatively light-touch and scalable, helping you prioritize your governance efforts.

When to Escalate to Tier 2

When the initial assessment rates a use case as medium risk or higher, you should escalate to Tier 2.
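In code, this trigger reduces to a one-line rule. The sketch below continues the hypothetical Tier 1 example above and reuses its RiskLevel enum.

```python
def needs_tier2(rating: RiskLevel) -> bool:
    """Escalate to Tier 2 when the Tier 1 rating is medium risk or higher.

    RiskLevel is the illustrative enum from the Tier 1 sketch above.
    """
    return rating.value >= RiskLevel.MEDIUM.value
```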

Tier 2: Intermediate Risk & Impact Assessment

A Tier 2 risk and impact assessment aligns with what many organizations picture when they think of risk and impact assessments. Assessments conducted at this level are more detailed and should provide a deeper understanding of where the use case poses greater risks and/or impacts on stakeholders. At this stage, the risk and impact assessments become two separate processes because they have different outcomes. Each assessment should document the likelihood and severity of every risk or impact identified.

Tier 2 risk assessments gather additional information on matters such as intended uses, data requirements, and system architecture and components. Risk assessments at this stage should map reasonably foreseeable inherent risks, describe risk mitigations, and capture residual risk levels. Tier 2 risk assessments should be conducted based on attestation by your organization’s internal experts, and documentation should be maintained and reviewed as part of internal auditing processes.
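One common way to record this is a risk register entry that rates likelihood and severity both before and after mitigations. The sketch below is one possible shape for such an entry; the 1–5 scales and the product-based score are conventional practice but assumptions here, not a requirement of any standard.

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One reasonably foreseeable risk in a Tier 2 risk assessment."""
    description: str           # e.g. "model produces biased eligibility decisions"
    inherent_likelihood: int   # 1 (rare) .. 5 (almost certain), before mitigations
    inherent_severity: int     # 1 (negligible) .. 5 (critical), before mitigations
    mitigations: list[str]     # controls applied to reduce the risk
    residual_likelihood: int   # re-rated after mitigations
    residual_severity: int     # re-rated after mitigations
    attested_by: str           # internal expert who signed off

    def inherent_score(self) -> int:
        # Conventional likelihood x severity product (1..25).
        return self.inherent_likelihood * self.inherent_severity

    def residual_score(self) -> int:
        return self.residual_likelihood * self.residual_severity
```

Keeping the inherent and residual ratings side by side makes the effect of each mitigation auditable, which supports the internal-audit review described above.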

Tier 2 impact assessments enumerate the exact types of individuals, organizations, and communities affected by the AI system, and describe how they will be impacted, both positively and negatively. You should establish informal processes to gather feedback from stakeholders who are potentially or actually impacted by the use case at the pre- and post-deployment stages.
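A parallel record can capture each stakeholder group alongside its impacts and feedback channel. As with the risk register sketch, the fields below are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class ImpactEntry:
    """One affected stakeholder group in a Tier 2 impact assessment."""
    stakeholder_group: str      # e.g. "loan applicants", "support staff"
    positive_impacts: list[str]
    negative_impacts: list[str]
    likelihood: int             # 1..5, chance the impact materializes
    severity: int               # 1..5, how serious it would be
    feedback_channel: str       # informal channel used pre- and post-deployment
```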

When to Use It

Tier 2 assessments are commonly used when conducting due diligence on AI products or services provided by third parties. They are also appropriate for external-facing AI products and services, especially when the initial assessment rates them at or above medium risk. Complying with certain standards or regulations, such as ISO 42001 or ISO 42005, may also require a Tier 2 assessment. Use cases that are considered high risk under the EU AI Act, or systems that qualify as general-purpose AI models with systemic risk, should also undergo a Tier 2 assessment.

When to Escalate to Tier 3

Use cases applied in highly sensitive circumstances (e.g., decision-making in heavily regulated industries or reliance on sensitive personal information), or that are rated very high risk during a Tier 1 or Tier 2 assessment, should be escalated to Tier 3.
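Expressed as a rule alongside the Tier 2 trigger above, the escalation logic might look like the following sketch; the `sensitive_context` flag is an assumed input that would be derived from questionnaire answers.

```python
def needs_tier3(rating: RiskLevel, sensitive_context: bool) -> bool:
    """Escalate to Tier 3 for highly sensitive contexts or very-high ratings.

    `sensitive_context` would be derived from questionnaire answers, e.g.
    decision-making in a heavily regulated industry or reliance on sensitive
    personal information. RiskLevel is the illustrative enum from the Tier 1
    sketch.
    """
    return sensitive_context or rating == RiskLevel.VERY_HIGH
```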


Tier 3: Advanced Risk & Impact Assessment

Tier 3 assessments are the most rigorous and formal category of assessment. Assessments at this level should use quantified metrics to measure inherent and residual risks, mitigations, severity, and likelihood. Tier 3 assessments are either conducted by internal experts and subjected to third-party audits, or outsourced to third parties altogether. It should be noted that the science for conducting this level of assessment is still developing, which limits its practicality.

Tier 3 risk assessments involve all the same functions as a Tier 2 assessment, but also seek to quantifiably measure inherent and residual likelihoods and severity levels where possible. Tier 3 risk assessments are often scrutinized or audited by third parties.
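One illustration of what “quantified” can mean in practice: expressing likelihood as an estimated probability and severity as an estimated loss, so inherent and residual risk become expected-loss figures. The probability, loss, and mitigation-effectiveness values below are assumptions a real assessment would have to justify with evidence.

```python
from dataclasses import dataclass

@dataclass
class QuantifiedRisk:
    """A Tier 3 risk with measured, rather than bucketed, ratings."""
    description: str
    inherent_probability: float      # estimated probability of occurrence per year
    inherent_loss: float             # estimated loss if it occurs (e.g. USD)
    mitigation_effectiveness: float  # fraction of expected loss removed, 0..1

    def inherent_expected_loss(self) -> float:
        return self.inherent_probability * self.inherent_loss

    def residual_expected_loss(self) -> float:
        return self.inherent_expected_loss() * (1 - self.mitigation_effectiveness)

# Hypothetical worked example: 10% annual probability, $500k loss,
# mitigations assumed to remove 80% of the expected loss.
risk = QuantifiedRisk(
    description="chatbot leaks personal data",
    inherent_probability=0.10,
    inherent_loss=500_000.0,
    mitigation_effectiveness=0.8,
)
assert risk.inherent_expected_loss() == 50_000.0
assert risk.residual_expected_loss() == 10_000.0
```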

Tier 3 impact assessments follow the same processes as Tier 2 assessments, but should include a formalized stakeholder engagement process where groups that may be impacted can provide feedback before a system is deployed. Your engagement process may also include an opt-in, pre-deployment pilot program. Stakeholder engagement should continue post-deployment and be continuously assessed to understand whether new groups or individuals are being impacted by the use case. Like Tier 3 risk assessments, your impact assessments at this stage may be subject to third-party scrutiny.

When to Use It

Tier 3 assessments are reserved for the highest risk use cases. Use cases that undergo Tier 3 assessments likely make or influence decisions that are difficult or impossible to reverse. Industries that face intense regulatory scrutiny of their products or services, or that undergo routine audits, will also likely apply Tier 3 assessments to AI use cases integrated into those products or services. In some instances, certain use cases will be prohibited in one jurisdiction but not another (e.g., the EU AI Act’s list of prohibited practices). These “prohibited use cases” should also be subject to Tier 3 assessments when they are deployed in jurisdictions that do not ban them. We also expect many government entities to conduct this level of assessment for AI use cases that have no opt-out option and are used in critical sectors.
