As the development and use of artificial intelligence continue to surge, so do the risks associated with it. In response, researchers from MIT and other institutions have unveiled the AI Risk Repository, an extensive database cataloging hundreds of documented risks tied to AI systems. This resource is designed to assist decision-makers in government, research, and industry in evaluating the evolving risks of AI.
Organizing AI Risk Classification
Though many organizations and researchers have recognized the need to address AI risks, efforts to document and classify these risks have been fragmented.
“We began this project with the aim of understanding how organizations respond to AI risks,” said Peter Slattery, incoming postdoc at MIT FutureTech and project lead. “We wanted a complete overview of AI risks to use as a checklist, but existing classifications were incomplete.”
The AI Risk Repository consolidates data from 43 existing taxonomies, including peer-reviewed articles, conference papers, and reports, resulting in a database of more than 700 unique risks.
The repository uses a two-dimensional classification system. First, risks are categorized by their causes, considering the responsible entity (human or AI), intent (intentional or unintentional), and timing (pre-deployment or post-deployment). Second, risks are classified into seven domains, including discrimination and toxicity, privacy and security, misinformation, and malicious use.
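To make the two-dimensional scheme concrete, here is a minimal sketch of how a single risk entry might be modeled in code. The field names and enum values below are illustrative assumptions drawn from the description above, not the repository's actual schema.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative enums mirroring the causal taxonomy described above;
# the repository's real schema may differ.
class Entity(Enum):
    HUMAN = "human"
    AI = "ai"

class Intent(Enum):
    INTENTIONAL = "intentional"
    UNINTENTIONAL = "unintentional"

class Timing(Enum):
    PRE_DEPLOYMENT = "pre-deployment"
    POST_DEPLOYMENT = "post-deployment"

@dataclass
class Risk:
    description: str
    entity: Entity   # first dimension: who or what causes the risk
    intent: Intent   #   was the harmful outcome intended?
    timing: Timing   #   does it arise before or after deployment?
    domain: str      # second dimension: one of the seven domains

# Example entry: a deployed model discriminating unintentionally.
example = Risk(
    description="Hiring model penalizes candidates from underrepresented groups",
    entity=Entity.AI,
    intent=Intent.UNINTENTIONAL,
    timing=Timing.POST_DEPLOYMENT,
    domain="Discrimination and toxicity",
)
```

Classifying every entry along both dimensions is what lets users slice the database by cause (who, why, when) as well as by the kind of harm involved.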
The AI Risk Repository is publicly accessible and designed to be updated regularly with new risks, research findings, and trends.
Practical Tool for Organizations
The AI Risk Repository serves as a practical resource for organizations across various sectors. For those developing or deploying AI systems, it offers a valuable checklist for risk assessment and mitigation.
“Organizations using AI can benefit from this database as a foundation for comprehensively assessing their risk exposure,” the researchers stated. “The taxonomies may also help in identifying behaviors needed to mitigate specific risks.”
For instance, an organization developing an AI-powered hiring system can use the repository to identify potential risks related to discrimination and bias. Similarly, a company using AI for content moderation can explore the “Misinformation” domain to understand and address risks associated with AI-generated content.
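As a rough sketch of that workflow: since the repository is published as a living database, a team could export it to CSV and filter entries by domain keyword. The file name and column headers below are assumptions for illustration, not the repository's actual layout.

```python
import csv

# Hypothetical export of the repository; the real file name and
# column headers may differ.
REPOSITORY_CSV = "ai_risk_repository.csv"

def risks_in_domain(path: str, domain_keyword: str) -> list[dict]:
    """Return rows whose 'Domain' column mentions the given keyword."""
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        return [
            row for row in reader
            if domain_keyword.lower() in row.get("Domain", "").lower()
        ]

# A team building a hiring system might start from discrimination-related
# entries and turn each into a checklist item for its risk assessment.
for risk in risks_in_domain(REPOSITORY_CSV, "discrimination"):
    print(risk.get("Risk description", ""))
```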
The research team acknowledges that the repository is a starting point rather than a finished assessment: organizations must still tailor their risk analysis to their own systems and context. Even so, a centralized repository reduces the chance of overlooking critical risks.
“We anticipate the repository will become increasingly valuable to enterprises,” said Neil Thompson, head of the MIT FutureTech Lab. “In future phases, we plan to add new risks, seek expert reviews, and provide more detailed information about the most pressing risks.”
Influencing Future AI Risk Research
Beyond its practical use, the AI Risk Repository is also a significant resource for AI risk researchers. It provides a structured framework for synthesizing information, identifying research gaps, and guiding future studies.
“This database can serve as a foundation for more specific research,” Slattery noted. “Previously, researchers had to spend significant time reviewing scattered literature or rely on limited frameworks. Now, they have a more comprehensive database to work from.”
The research team plans to use the repository to identify gaps in how risks are being addressed and to ensure that all significant risks are adequately considered. They will continue updating the repository to keep it relevant for researchers, policymakers, and industry professionals focused on AI risks and mitigation.