PRA Framework

The PRA for AI Workbook

Workbook Overview

The PRA for AI workbook is a tool for conducting probabilistic risk assessments of AI systems. It offers an integrated environment covering the full assessment process, including all components and protocols needed to select an assessment complexity, execute the assessment, and generate a final risk report card.

The workbook tool was initially released in October 2024 as version 0.9.0-alpha and is under active development. The current v0.9.1-alpha release provides a functional foundation for conducting systematic risk assessments. The workbook includes structured support for scanning potential risks through an aspect-oriented taxonomy of AI Hazards; extensive guidance on modelling threats, generating risk scenarios, and estimating levels of risk; and detailed instructions on documenting each step of the process. Users receive an auto-generated risk report card that presents their aggregate findings, together with an output log: a static version of the entry log that preserves risk scenarios, assumptions, and data for future reference and comparison.
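As a generic illustration of the kind of estimation step described above (not the workbook's actual method; the scales, scenario names, and aggregation rule here are hypothetical), a per-scenario risk level is often derived by combining a likelihood with a severity and then aggregating across scenarios:

```python
# Hypothetical sketch of estimating and aggregating risk levels.
# Scales (likelihood 0-1, severity 0-10), labels, and the max-based
# aggregation are illustrative only, not taken from the PRA for AI workbook.

def risk_index(likelihood: float, severity: float) -> float:
    """Combine a scenario's likelihood (0-1) and severity (0-10)."""
    return likelihood * severity

def aggregate(scenarios: dict[str, tuple[float, float]]) -> dict:
    """Score each scenario and report the maximum as the headline risk."""
    scores = {name: risk_index(l, s) for name, (l, s) in scenarios.items()}
    return {"per_scenario": scores, "headline": max(scores.values())}

report = aggregate({
    "data leakage": (0.30, 6.0),   # (likelihood, severity)
    "misuse":       (0.05, 9.0),
})
print(report["headline"])  # -> 1.8
```

In practice a real assessment would document the assumptions behind each likelihood and severity estimate, which is the role the workbook's entry log plays.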

The buttons above provide access to a copy of the workbook and a static rendering of the interactive user guide from the workbook, with pre-selected settings for the AML-120 team assessment protocol. For more information about the assessment settings, please refer to the complete user guide.

Quick Start Guide

Getting started with the PRA for AI workbook tool

Release Schedule

The PRA for AI workbook is under active development. Current releases, planned updates, and upcoming features are documented below. Please note that these dates and features are subject to change.

Version      Release                Major Features
0.9.0-alpha  21st October 2024      Initial alpha release with basic functionality
0.9.1-alpha  19th November 2024     Updated assessment workflow and report card
0.9.2-alpha  Expected January 2025  More detailed taxonomy with hazard clusters

Upcoming Features

  • Release of complete taxonomy levels including Hazard Clusters (TL3) and AI Hazards (TL4)
  • Capability Levels and Domain Knowledge Levels calibration tables
  • Further examples to guide workbook component usage
  • Detailed foundational model risk assessment case study

The development team is actively seeking user feedback to enhance and streamline the assessment workflow.