
4 Ways to Prepare for Upcoming AI Regulations

Apr 24, 2023

4 min read

Governments and regulators around the world are taking a closer look at how to manage AI’s potential risks and benefits. While regulations vary by country and industry, it's important for companies developing and deploying AI to stay ahead of the game and prepare for potential regulatory challenges.


The most significant regulation proposed to govern AI is the European Union's AI Act. The act aims to ensure that AI is developed and used in a way that is safe, transparent, and aligned with European values and fundamental rights. Failure to comply could expose companies to significant financial penalties, legal liability, reputational damage, and lost business opportunities. While the law will mostly affect organizations developing high-risk AI systems, recent amendments to the proposal will likely extend its scope to applications leveraging foundation or generative AI models as well.


In the United States, the Democratic Party’s leadership has expressed its intent to introduce a federal law regulating AI, though we consider the likelihood of such a law passing through Congress to be low. Federal agencies such as the FDA, FTC, and NTIA are introducing rules for AI within their enforcement jurisdictions. There is also a strong possibility that the NIST AI Risk Management Framework becomes the universally recommended (though not enforced) standard for managing AI risk. In Canada, the Artificial Intelligence and Data Act (AIDA) is already gaining momentum and support from the broader AI community in the country and could be enacted as early as 2023.


While national and international AI regulations are still forming, a myriad of existing laws already claim jurisdiction over the technology. A financial services company using AI to evaluate creditworthiness could be subject to the Equal Credit Opportunity Act (ECOA) if the AI system results in discriminatory practices or criteria. Recently, Italy temporarily banned ChatGPT amid a probe into a suspected breach of Europe’s strict privacy regulations. In Japan, the Product Liability Act may hold the developer or operator of an AI system liable if damages arise from tangible objects (such as machinery or robotics).


Managing the rapidly evolving regulatory landscape will be a complex task for global businesses and governmental institutions. It is important to consult your legal partners to better understand the current state of AI regulations and ensure you have a dedicated team with the legal and technical expertise necessary to comply with these requirements.


Despite the regulatory uncertainty, your organization can get ahead of the upcoming AI regulations by implementing these 4 steps to manage risk and build trust.


  1. Create your organizational AI policies – formalize internal policies and procedures to build trust in your AI systems. This includes implementing a vendor review process for third-party AI systems, defining roles for who reviews proposed AI use cases, establishing guidelines for ethical AI development and use, and developing a process to determine how your AI affects impacted stakeholders. These policies should be documented, kept up to date, and accessible to internal stakeholders.

  2. Set up an AI inventory – construct an internal inventory to understand where AI is being used across your organization and where you may have risk exposure. Identifying which of your AI use cases are likely to be categorized as ‘high risk’ under the EU AI Act will help your organization prioritize resources for the future. Using the Trustible AI Governance platform, your organization can centralize all of your AI use cases and applications in a single source of truth, categorize risks, and seamlessly generate model cards and data sheets – a critical requirement for compliance reporting. (A minimal, illustrative inventory sketch appears after this list.)

  3. Assess your risk – to identify potential risks, organizations should conduct risk assessments of their AI systems. This includes evaluating and documenting the impact of AI on privacy, security, and fairness. Companies should also consider the potential consequences of AI failures and how to mitigate those risks. Additionally, organizations should implement third-party risk and conformity assessments to demonstrate that their AI systems and organizational practices fulfill the requirements of applicable responsible AI regulations, laws, best practices, specifications, and standards.

  4. Build stakeholder feedback loops – AI regulations such as the EU AI Act require organizations to continuously monitor and collect feedback from affected stakeholders on how the AI is impacting them. This typically involves an online tool where you can quickly detect and respond to any reported incidents or unintended consequences of the system. This regulatory requirement aligns with responsible organizational efforts to enhance transparency, explainability, accountability, and trust in the use of your AI. (A simple feedback-log sketch follows the inventory example below.)
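
For teams starting without a dedicated governance platform, an AI inventory can begin as a structured record per use case. The sketch below is a minimal illustration only, with hypothetical field names and a simplified version of the EU AI Act’s draft risk tiers; it is not a substitute for a governance platform or legal review.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class RiskTier(Enum):
    """Draft EU AI Act risk tiers (simplified for illustration)."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


@dataclass
class AIUseCase:
    """One entry in an internal AI inventory (hypothetical schema)."""
    name: str                      # e.g. "Creditworthiness scoring"
    owner: str                     # accountable team or individual
    vendor: str                    # "in-house" or a third-party supplier
    purpose: str                   # short plain-language description
    risk_tier: RiskTier            # provisional EU AI Act categorization
    affected_stakeholders: List[str] = field(default_factory=list)
    last_reviewed: str = ""        # ISO date of the last risk assessment


# Example: register a use case and flag high-risk entries for assessment
inventory = [
    AIUseCase(
        name="Creditworthiness scoring",
        owner="Lending Analytics",
        vendor="in-house",
        purpose="Rank loan applicants by estimated default risk",
        risk_tier=RiskTier.HIGH,
        affected_stakeholders=["loan applicants"],
        last_reviewed="2023-04-01",
    ),
]

needs_assessment = [uc.name for uc in inventory if uc.risk_tier is RiskTier.HIGH]
print(needs_assessment)  # ['Creditworthiness scoring']
```

Even a simple registry like this makes it easier to answer the first question regulators and auditors will ask: where is AI being used, and who owns it?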


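Likewise, the feedback loop in step 4 does not need to start as a complex system: a timestamped incident log that stakeholders can write to, and that the responsible team reviews regularly, covers the basic monitoring idea. The snippet below is an illustrative sketch only, with invented function and field names, not a prescribed implementation.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

FEEDBACK_LOG = Path("ai_feedback_log.jsonl")  # hypothetical append-only log


def report_incident(use_case: str, reporter: str, description: str) -> None:
    """Append a stakeholder report so the AI owner can triage it."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "use_case": use_case,
        "reporter": reporter,
        "description": description,
        "status": "open",
    }
    with FEEDBACK_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


# Example: an affected applicant's complaint is logged for review
report_incident(
    use_case="Creditworthiness scoring",
    reporter="customer-support",
    description="Applicant reports a denial that may not reflect their credit history.",
)
```
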
AI products will continue to evolve over the coming months and years. New interfaces, models, and applications will be released, and your organization’s ability to leverage these technologies for breakthrough innovations will largely depend on your regulatory risk management practices. By taking these steps, organizations can build the infrastructure necessary to mitigate regulatory risk, build trust with consumers and stakeholders, and stay competitive in the evolving AI landscape.


