Ahead of the AI safety summit commencing in Seoul, South Korea later this week, the United Kingdom is intensifying its efforts in the field. The AI Safety Institute, established in November 2023 with the ambitious goal of assessing and mitigating risks in AI platforms, has announced the opening of a second location in San Francisco.

This strategic move positions the Institute closer to the heart of AI development, with the Bay Area being home to major players such as OpenAI, Anthropic, Google, and Meta.

Foundational models are crucial for generative AI services and various applications. Despite the UK signing a Memorandum of Understanding (MOU) with the US to collaborate on AI safety initiatives, the UK is investing in establishing a direct presence in the US to address these issues.

“Having personnel in San Francisco will provide access to the headquarters of numerous AI companies,” stated Michelle Donelan, the UK Secretary of State for Science, Innovation, and Technology, in an interview with TechCrunch. “While many of these companies have bases in the UK, it is beneficial to have a base in San Francisco to access additional talent and collaborate more closely with the United States.”

Being closer to the epicenter of AI development is advantageous for the UK not only to understand current innovations but also to increase visibility with these firms. AI and technology are viewed by the UK as significant opportunities for economic growth and investment.

The timing of this expansion is particularly relevant given the recent developments at OpenAI regarding its Superalignment team.

Currently, the AI Safety Institute is a relatively small organization with just 32 employees. This modest size contrasts with the massive investments in AI technology companies, which have strong economic incentives to rapidly deploy their technologies.

One of the Institute’s significant achievements is the release of Inspect, a set of tools for testing the safety of foundational AI models, earlier this month. Donelan described this release as a “phase one” effort. Although benchmarking AI models has proven challenging, the engagement remains voluntary and inconsistent. Companies are not legally required to have their models vetted pre-release, which means potential risks might not be identified until after deployment.

Donelan indicated that the AI Safety Institute is still refining its evaluation process. “Our evaluation process is an emerging science,” she said. “With every evaluation, we will improve and refine it further.”

At the upcoming summit in Seoul, Donelan aims to present Inspect to regulators, encouraging them to adopt the tool for their evaluations.

“We now have an evaluation system. Phase two must focus on ensuring AI safety across society,” she added.

Looking ahead, Donelan anticipates that the UK will develop more AI legislation. However, consistent with Prime Minister Rishi Sunak’s stance, the UK will refrain from legislating until it fully understands the scope of AI risks. “We do not believe in legislating prematurely,” she said, emphasizing the need for comprehensive research to inform legislative efforts.

Ian Hogarth, chair of the AI Safety Institute, reiterated the importance of international collaboration in AI safety. “Since the Institute’s inception, we have prioritized an international approach to AI safety, sharing research, and working collaboratively to test models and anticipate risks,” he stated. “Today marks a pivotal moment, allowing us to scale our operations in a tech-rich area, complementing the expertise of our staff in London.”