AI Safety Institute Launches AI Model Safety Testing Tool Platform 

Business developers can use Inspect to test AI models before public release 

The U.K.’s AI Safety Institute has launched a new platform allowing businesses to test their AI models before launching them publicly. 

The platform, named Inspect, is a software library designed to assess AI model capabilities, scoring them in areas such as reasoning and autonomous abilities. 

Few safety testing tools are available to developers today; MLCommons unveiled a benchmark focused on large language model safety only last month. 

Inspect was built to fill the gap, launching in open source so anyone can use it to test their AI models. 

Businesses can use Inspect to evaluate prompt engineering for their AI models and external tool usage. The tool also ships with evaluation datasets of labeled samples, so developers can examine in detail the data used to test a model. 
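To give a concrete sense of how such an evaluation is wired together, here is a minimal sketch of an Inspect task, based on the library's public documentation rather than on details in this article; the task name, prompts, and expected answers are illustrative assumptions.

```python
# Minimal sketch of an Inspect evaluation task (package: inspect_ai).
# The task name and the labeled samples are illustrative assumptions.
from inspect_ai import Task, task
from inspect_ai.dataset import Sample
from inspect_ai.scorer import match
from inspect_ai.solver import generate

@task
def arithmetic_check():
    # Labeled samples: each pairs a prompt with its expected answer,
    # so developers can see exactly what the model is being tested on.
    dataset = [
        Sample(input="What is 7 * 8?", target="56"),
        Sample(input="What is 12 + 30?", target="42"),
    ]
    return Task(
        dataset=dataset,
        solver=generate(),  # ask the model to answer each prompt
        scorer=match(),     # score answers by matching the target string
    )
```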

It’s designed to be easy to use, with explainers for running the various tests provided throughout, including guidance for models hosted in a cloud environment such as AWS Bedrock. 
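As a rough illustration of that workflow, the task sketched above could be pointed at a cloud-hosted model from Python; the Bedrock model identifier below is an assumption for illustration, not something named in the article.

```python
# Sketch of running the task above against a cloud-hosted model.
# The model identifier is an illustrative assumption; Inspect selects the
# provider from the prefix of the model name (e.g. "bedrock/", "openai/").
from inspect_ai import eval
from arithmetic_check import arithmetic_check  # hypothetical module from the sketch above

logs = eval(
    arithmetic_check(),
    model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0",
)
```

The same task can also be run from the command line with Inspect's `inspect eval` command, switching providers via its model flag.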

The decision to open source the testing tool would enable developers worldwide to conduct more effective AI evaluations, according to the Safety Institute. 

“As part of the constant drumbeat of U.K. leadership on AI safety, I have cleared the AI Safety Institute’s testing platform to be open sourced,” said Michelle Donelan, U.K. technology secretary. “The reason I am so passionate about this and why I have open sourced Inspect, is because of the extraordinary rewards we can reap if we grip the risks of AI.” 

“The ability of Inspect to evaluate a wide range of AI capabilities and provide a safety score empowers organizations, big and small, to not only harness AI’s potential but also ensure it is used responsibly and safely,” said Veera Siivonen, Saidot’s chief commercial officer. “This is a step towards democratizing AI safety, a move that will undoubtedly drive innovation while safeguarding against the risks associated with advanced AI systems.” 

 
