Since its launch in November 2022, ChatGPT, an artificial intelligence (AI) chatbot, has caused quite a stir with its surprisingly human and accurate responses.
The auto-generative system reached a record 100 million monthly active users just two months after launch. While its popularity continues to grow, the current debate within the cybersecurity industry is whether this type of technology will help make the internet a safer place or play into the hands of those seeking to cause chaos.
AI software has several cybersecurity use cases, including advanced data analytics, automating repetitive tasks, and helping calculate risk scores. However, soon after its debut, it was quickly established that this easy-to-use, freely available chatbot could also help hackers infiltrate software and develop sophisticated phishing tools.
So, is ChatGPT a gift from the cybersecurity gods or a plague used to punish? To discover the answer, we need to look at the pros, cons and future. Let’s dive in.
What are the current dangers of ChatGPT?
Like any new technological advancement, there will always be some negative implications, and ChatGPT is no different.
Currently, the most talked-about chatbot issue is the ease of creating highly convincing phishing texts, which are likely to be used in malicious emails. Due to the lack of safeguards, it is easy for attackers, including those whose first language is not English, to use ChatGPT to write an eloquent, convincing message with near-perfect grammar in seconds.
And since Americans lost $40 billion in 2022 to these scams, it’s easy to see why criminals would use ChatGPT to get their hands on a piece of this lucrative illicit pie.
AI-powered chatbots also raise the question of job security. Of course, the current system cannot replace a highly trained professional, but this technology can significantly reduce the number of logs and reports that must be inspected by an employee. This can impact the number of analysts a Security Operations Center (SOC) needs.
While the software offers several benefits to cybersecurity companies, plenty of firms will adopt the technology simply because of its current popularity and use it to lure new customers. However, using the technology purely for its fad status can lead to abuse. Companies may not install adequate security measures, hindering progress toward an effective security program.
The cybersecurity benefits of ChatGPT
As with any new technology, disruption is an inevitable part, but that doesn’t have to be a bad thing.
Cybersecurity companies can add an extra layer of intelligence to their manual efforts to sift through audit logs or inspect network packets to distinguish threats from false alarms.
Due to ChatGPT’s ability to detect patterns and search within specific parameters, it can also be used for repetitive tasks and report generation. Cyber companies can then more intelligently calculate risk scores for threats affecting organizations by using ChatGPT as a super-powered research assistant.
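The triage-and-scoring workflow described above can be sketched in a few lines of Python. Everything here, from the category weights to the helper names, is a hypothetical illustration of the general idea rather than any vendor's actual pipeline; the threat likelihood would come from an upstream classifier (an LLM or otherwise), and the asset weights would be tuned per organization.

```python
# Hypothetical sketch: combine a model-assigned threat likelihood with
# asset criticality to produce a risk score for alert triage.
# All weights and categories are illustrative, not from any real product.

ASSET_CRITICALITY = {
    "workstation": 1.0,
    "database": 3.0,
    "domain_controller": 5.0,
}

def risk_score(threat_likelihood: float, asset_type: str) -> float:
    """Scale a 0-1 likelihood (e.g., from an LLM classifier) by asset weight."""
    if not 0.0 <= threat_likelihood <= 1.0:
        raise ValueError("likelihood must be in [0, 1]")
    weight = ASSET_CRITICALITY.get(asset_type, 1.0)
    return round(threat_likelihood * weight * 20, 1)  # roughly 0-100 scale

def triage(alerts: list[dict]) -> list[dict]:
    """Sort alerts so analysts see the highest-risk items first."""
    for alert in alerts:
        alert["risk"] = risk_score(alert["likelihood"], alert["asset"])
    return sorted(alerts, key=lambda a: a["risk"], reverse=True)
```

The point of a sketch like this is the division of labor: the model handles the fuzzy pattern-matching over raw logs, while simple, auditable arithmetic decides what an analyst sees first.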
For example, Orca Security, an Israel-based cybersecurity firm, has begun using ChatGPT’s analytics capabilities to plow through the ocean of data and help with security alerts. Recognizing early on how the chatbot can improve its day-to-day operations also allows the company to learn from the technology, giving it a unique advantage in customizing its models and optimizing how ChatGPT works for its business.
What’s more, the chatbot’s natural language fluency, which makes it so good at writing phishing emails, also makes it well suited to turning complicated security policies into clear text. These texts can be used on cybersecurity websites and in training documents, saving valuable time for valued team members.
The future of ChatGPT
ChatGPT’s AI technology is readily available to most of the world. Therefore, like any other battle, it’s just a race to see which side will use technology better.
Cybersecurity companies will constantly have to combat nefarious users who devise ways to use ChatGPT to wreak havoc in ways cybersecurity companies have yet to fathom. And yet, this fact hasn’t deterred investors, and ChatGPT’s future looks bright. With Microsoft investing $10 billion in OpenAI, it is clear that ChatGPT’s knowledge and skills will continue to grow.
For future versions of this technology, software developers should pay close attention to the current lack of safeguards; the devil is in the details.
OpenAI probably won’t be able to eliminate this problem entirely. It may add mechanisms that evaluate users’ habits and flag obvious prompts such as “write me a phishing email as if I were someone’s boss,” or attempt to validate individuals’ identities.
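A crude version of that kind of prompt screening might look like the following. The phrase list and function are purely hypothetical, and real moderation systems rely on trained classifiers rather than keyword matching, which is trivially evaded by rephrasing; the sketch only illustrates why catching obvious prompts is easy while catching determined attackers is not.

```python
import re

# Hypothetical illustration: flag prompts that openly request phishing
# content. Real moderation uses trained classifiers, not keyword lists.
SUSPICIOUS_PATTERNS = [
    r"\bphishing (e-?mail|message|text)\b",
    r"\bas if i were (someone's|their) boss\b",
    r"\bsteal (credentials|passwords)\b",
]

def flag_prompt(prompt: str) -> bool:
    """Return True if the prompt matches an obviously malicious pattern."""
    lowered = prompt.lower()
    return any(re.search(pattern, lowered) for pattern in SUSPICIOUS_PATTERNS)
```

A prompt like “write me a phishing email as if I were someone’s boss” trips the filter, while an attacker who asks for “an urgent payroll update notice from a manager” sails straight through, which is precisely the cat-and-mouse dynamic the article describes.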
OpenAI could even work with researchers to train its models on attack data, so the system can recognize when its text has been used in attacks elsewhere.
However, all of these ideas create a whole host of problems, including rising costs and data protection issues.
To tackle the current phishing epidemic, more people need education and awareness to recognize these attacks. And the industry needs more investment from mobile carriers and email providers to limit the number of attacks in the wild.
So many products and services will emerge from ChatGPT, bringing immense value to help protect businesses as they work to change the world. And there will also be plenty of new tools created by hackers that will allow them to attack more people in less time and in new ways.
AI-powered chatbots are here to stay, and ChatGPT has competition, with Google’s Bard and Microsoft’s Bing looking to give OpenAI’s creation a run for its money. Nevertheless, it is of paramount importance that cybersecurity companies view ChatGPT as both an offensive and a defensive tool, and not become enamored with the opportunity to simply generate more revenue.
Taylor Hersom is founder and CEO of Eden Data.