As the speed and scale of AI innovations and their associated risks grow, AI research firm Anthropic is calling for $15 million in funding for the National Institute of Standards and Technology (NIST) to support the agency’s AI measurement and standards efforts.
Anthropic published a call-to-action memo yesterday, two days after a budget hearing on 2024 funding for the US Department of Commerce that included bipartisan support for maintaining US leadership in the development of critical technologies. NIST, an agency of the US Department of Commerce, has worked for years to measure AI systems and develop technical standards, including the Face Recognition Vendor Test and the recent AI Risk Management Framework.
The memo said an increase in federal funding for NIST “is one of the best ways to channel that support … so that it is well-placed to carry out its work promoting safe technology innovation.”
A ‘shovel-ready’ AI risk approach
While other, more ambitious proposals have recently been made — calls for an “international agency” for artificial intelligence, legislative proposals for an AI “regulatory regime,” and, of course, an open letter to temporarily “pause” AI development — Anthropic’s memo said the call for NIST funding is a simpler, “shovel-ready” idea available to policymakers.
“Here’s something we could do today that doesn’t require anything too wild,” Anthropic co-founder Jack Clark said in an interview with VentureBeat. Clark, who has been active in AI policy work for years (including a stint at OpenAI), added that “this is the year to be ambitious about this funding because this is the year most policymakers have woken up to AI” and want to contribute ideas.
The clock is ticking when dealing with AI risk
Clark admitted that for a company like Google-funded Anthropic, one of the top companies building large language models (LLMs), proposing this kind of measure is “a bit weird.”
“It’s not that typical, so I think this implicitly shows that the clock is ticking” when it comes to addressing AI risk, he explained. But it’s also an experiment, he added: “We’re publishing the memo because I want to see what the response is, both in DC and more broadly, because I hope it will convince other companies and academics and others to spend more time publishing things like this.”
If NIST is funded, he noted, “we’ll get more solid measurement and evaluation work in a place that naturally brings together government, academia, and industry.” On the other hand, if unfunded, more evaluation and measurement “would be driven solely by industry actors because they are the ones spending the money. The AI conversation flows better with more people at the table, and this is just a logical way to get more people at the table.”
The downsides of industrial capture in AI
It is noteworthy that Anthropic — a company reportedly seeking billions to take on OpenAI, and one famously tied to the collapse of Sam Bankman-Fried’s crypto empire — is the one warning about the disadvantages of “industrial capture.”
“Over the past decade, AI research has evolved from a mostly academic exercise to an industrial one, if you look at where the money is spent,” he said. “This means that many systems that cost a lot of money are driven by this minority of actors, who are mostly in the private sector.”
There are two key ways to improve that, Clark explained: create government infrastructure that gives government and academia a way to train frontier systems and to build and understand them themselves. “In addition, you can have more people develop the measurement and evaluation systems to try to take a good look at what is happening at the frontier and test the models.”
A society-wide conversation that policymakers should prioritize
As the chatter increases about the risks of the huge datasets used to train popular large language models like ChatGPT, Clark said research into AI systems’ performance and behavior, their interpretability, and what the right level of transparency should look like is important. “One hope I have is that a place like NIST can help us create a kind of gold-standard public dataset that will eventually be used by everyone, either as part of the system or as inputs into the system,” he said.
Overall, Clark said he got into AI policy work because he saw its growing importance as a “gigantic society-wide conversation.”
When it comes to working with policymakers, he added, it’s mostly about understanding the questions they have and trying to be helpful.
“The questions are things like ‘Where does the US rank with China on AI systems?’ or ‘What is fairness in the context of generative AI text systems?'” he said. “Just try to meet them where they are and answer that question, then use it to talk about broader issues – I really think people get a lot more knowledge about this area very quickly.”