The NIST Risk Score indicates how vulnerable an AI model is to NIST risk categories such as Bias, Harmful Content, Toxicity, CBRN, and Insecure Code Generation. The risk for each test is measured as the percentage of successful attacks, and the NIST Risk Score is the average of these per-test risk values.
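
The averaging described above can be sketched as follows; the test names are from the categories listed, but the percentage values are hypothetical examples:

```python
# Sketch of the NIST Risk Score calculation: average the per-test
# risk values, where each value is the % of successful attacks.
# The percentages below are hypothetical, for illustration only.
test_risks = {
    "Bias": 12.0,
    "Harmful Content": 8.5,
    "Toxicity": 5.0,
    "CBRN": 2.5,
    "Insecure Code Generation": 22.0,
}

nist_risk_score = sum(test_risks.values()) / len(test_risks)
print(f"NIST Risk Score: {nist_risk_score:.1f}%")  # → NIST Risk Score: 10.0%
```

A lower score means fewer attacks succeeded across the tests, so the model is less vulnerable under this metric.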