This month, top government officials met with top tech executives, including the CEOs of Alphabet and Microsoft, to discuss AI advancements and Washington's involvement. But just as fast as ChatGPT, Bard and other well-known generative AI models are advancing, US companies need to know that malicious actors, including the most successful hacking groups in the world and aggressive nation states, are building their own generative AI replicas, and they will stop at nothing.
There is ample reason for experts to be concerned about the overwhelming speed at which generative AI could transform the tech industry, the medical industry, education, agriculture and almost every other industry, not just in America but around the world. Movies like The Terminator, for example, provide enough (fictitious) precedent to fear the effects of a runaway AI, fueling more realistic concerns such as AI-induced mass layoffs.
But precisely because AI has the power to revolutionize society as we know it, America cannot afford a private or government-imposed pause in its development; such a pause would cripple our ability to empower individuals and companies to defend against our enemies. Because AI development is moving so fast, any delay that regulators impose would set us back exponentially compared to our adversaries, who are developing their own AI without interruption.
AI advances quickly, government regulates slowly
Regulators aren’t used to moving at the speed AI demands, and even if they were, there’s no guarantee it would make a difference in how we can use AI to successfully defend against adversaries. For example, lawmakers have spent decades trying to regulate and penalize America’s recreational drug trade, but criminals who market dangerous, illegal substances don’t play by those rules. Our geopolitical rivals will behave the same way, ignoring any attempt by America to put guardrails around AI development.
In the past eight months, hackers have claimed to be developing or investing heavily in artificial intelligence, and researchers have already confirmed that attackers can turn to OpenAI’s tools to help them hack. How effective these methods are today, and how advanced other countries’ AI tools are, doesn’t matter; what matters is that we know they are developing them and will certainly use them for malicious purposes. Because these attackers and nations will not abide by any moratorium America places on AI development, our country cannot afford to interrupt its research and risk falling behind its adversaries in multiple ways.
In the cybersecurity space, we’ve always referred to our ability to create tools that thwart attackers’ exploits and scams as an arms race. But with AI as advanced as GPT-4 in the picture, the arms race has gone nuclear. Malicious actors can use artificial intelligence to find vulnerabilities and entry points, and to generate phishing messages that pull information from public company emails, LinkedIn and org charts, making them nearly indistinguishable from real emails or text messages.
On the other hand, cybersecurity companies looking to bolster their defensive prowess can use AI to easily identify patterns and anomalies in system access records, or create test code, or as a natural language interface for analysts to quickly gather information without programming.
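To make the defensive side concrete, here is a minimal sketch of flagging anomalies in access records. It uses a simple per-user statistical baseline as a stand-in for the AI models the article describes; the data shape, field names and threshold are all illustrative assumptions, not any vendor's actual implementation.

```python
# Minimal sketch: flag anomalous login activity in access records by
# comparing each user's current hourly login count against their own
# historical baseline. All data and thresholds here are illustrative.
from statistics import mean, stdev

def baseline(history):
    """Compute each user's (mean, standard deviation) of hourly login counts."""
    stats = {}
    for user, counts in history.items():
        sigma = stdev(counts) if len(counts) > 1 else 0.0
        stats[user] = (mean(counts), sigma)
    return stats

def flag_anomalies(stats, current, threshold=3.0):
    """Return users whose current count sits more than `threshold`
    standard deviations above their baseline mean."""
    flagged = []
    for user, count in current.items():
        mu, sigma = stats.get(user, (0.0, 0.0))
        if sigma == 0.0:
            # No variance observed in history: treat any increase as suspicious.
            if count > mu:
                flagged.append(user)
        elif (count - mu) / sigma > threshold:
            flagged.append(user)
    return flagged

history = {
    "alice": [3, 4, 3, 5, 4, 3],  # typical hourly login counts
    "bob":   [1, 1, 2, 1, 1, 1],
}
current = {"alice": 4, "bob": 40}  # bob's sudden spike suggests credential abuse
print(flag_anomalies(baseline(history), current))  # → ['bob']
```

A production system would learn a far richer baseline (source IPs, geolocation, time of day) with actual ML models, but the pattern is the same: model what is normal for each identity and surface the deviations for an analyst.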
What’s important to remember, though, is that both sides are developing their arsenal of AI-based tools as quickly as possible – and pausing that development would only sideline the good guys.
The need for speed
That’s not to say we should let private companies develop AI as a completely unregulated technology. When genetic engineering became a reality in healthcare, the federal government regulated it within America to enable more effective drugs, while acknowledging that other countries and independent adversaries could use it unethically or to do harm, for example by creating viruses.
I believe we can do the same for AI by acknowledging the need to create protections and standards for ethical use, while understanding that our enemies will not follow those rules. To do this, our government and technology CEOs must operate quickly and without delay. We have to work at the pace of AI's current development; in other words, at the speed of data.
Dan Schiappa is chief product officer at Arctic Wolf.