

Commercial developers can use Inspect to test AI models before they are released to the public.
This article originally appeared on AI Business.

The UK’s AI Safety Institute has launched a new platform that allows companies to test their AI models before releasing them to the public.
The platform, called Inspect, is a software library designed to assess the capabilities of AI models and score them in areas such as reasoning and autonomous capabilities. Few safety testing tools are available to developers today; last month, MLCommons introduced a large language model-focused benchmark for safety testing. Inspect has been released open source, so anyone can use it to test their AI models.
Companies can use Inspect to evaluate their AI models’ prompt engineering and use of external tools. The tool also includes evaluation datasets containing labeled samples, so developers can closely examine the data used to test a model.
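For illustration, here is a minimal evaluation sketched against Inspect’s Python API as shown in its documentation at launch; the module paths and parameter names (Task, Sample, plan, match) are drawn from that documentation and may differ in later releases, and the sample question is invented for the example:

```python
from inspect_ai import Task, task
from inspect_ai.dataset import Sample
from inspect_ai.scorer import match
from inspect_ai.solver import generate

@task
def basic_reasoning():
    return Task(
        # Labeled samples: each pairs an input prompt with an expected
        # answer, so developers can see exactly what the model is tested on.
        dataset=[
            Sample(
                input="If all bloops are razzies and all razzies are "
                      "lazzies, are all bloops lazzies? Answer yes or no.",
                target="yes",
            ),
        ],
        # The plan defines how the model is prompted to produce an answer.
        plan=[generate()],
        # The scorer compares the model's output against the labeled target.
        scorer=match(),
    )
```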
It is designed to be easy to use, with guidance on how to run each test, even when a model is hosted in a cloud environment such as AWS Bedrock. The Safety Institute says open sourcing the testing tool will enable developers around the world to conduct more effective AI assessments.
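To sketch what “easy to run, even against a cloud-hosted model” might look like in practice, an evaluation can be launched from Python with a provider-prefixed model name; the Bedrock model identifier below is an assumption for illustration, not something named in the article:

```python
from inspect_ai import eval

# Assuming basic_reasoning() from the sketch above is defined in the
# same file, run it against an OpenAI-hosted model...
eval(basic_reasoning(), model="openai/gpt-4")

# ...or against a model hosted on AWS Bedrock. The "bedrock/" prefix
# selects the Bedrock provider; this model ID is illustrative only.
eval(basic_reasoning(), model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0")
```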
“As part of the UK’s ongoing leadership on AI safety, I have authorised the AI Safety Institute’s testing platform to be open sourced,” said Michelle Donelan, the UK’s technology secretary. “The reason I’m so passionate about Inspect, and why I’ve made it open source, is because of the tremendous benefits we can gain if we control the risks of AI.”
The Safety Institute said it plans to develop more open source testing tools beyond Inspect in the future, and will collaborate on testing projects with its U.S. counterpart following the joint working agreement the two signed in April. “Successful collaboration in AI safety testing is about having a shared and accessible approach to assessments, and we hope Inspect can be a critical building block for AI safety institutes, research organisations and academia,” said Ian Hogarth, chair of the AI Safety Institute. “We look forward to seeing the global AI community use Inspect not only to conduct their own model safety tests, but also to help adapt and develop the open source platform so we can produce high-quality assessments across the board.”
The success of the Safety Institute’s new platform will be measured by how many companies commit to using the testing tool, according to Amanda Brock, CEO of OpenUK. “With the UK slow to regulate, this platform simply has to succeed for the UK to have a place in the future of AI,” Brock said. “All eyes will now be on South Korea and the upcoming AI safety summit to see how the world embraces it.”
“Inspect’s ability to assess a wide range of AI capabilities and provide a safety score enables organisations, large and small, to not only harness the potential of AI but also ensure it is used responsibly and safely,” said Veera Siivonen, commercial director at Saidot. “This is a step toward democratizing AI safety, a move that will undoubtedly spur innovation while protecting against the risks associated with advanced AI systems.”