With the blistering pace of artificial intelligence (AI) innovation we’re witnessing, it’s increasingly clear that many companies, organizations and even governments are ill-equipped to manage the impacts this new frontier will bring to their operations, workforces and cybersecurity. Large language models (LLMs) like GPT-4 have demonstrated an astonishing ability to synthesize vast amounts of data and produce articulate, creative and compelling content. But we know that not everyone will use the power of these tools for good. We need to keep a watchful eye on the dark side of AI.
According to Verizon’s 2022 Data Breach Investigations Report, the vast majority of breaches (82%) involve a human element, such as social engineering that manipulates a person into making a mistake. From inscrutable black-box algorithms that produce misleading or manipulative content to deepfakes and other AI-powered engines of mass misinformation, the human-focused cyberthreats posed by AI are coming into sharper view. The rapid advances in AI are forcing companies to renew their emphasis on human intelligence, from the ability to identify cyberattacks in progress to new policies and practices around information verification, data sharing and other behaviors that can leave companies vulnerable.
While there will undoubtedly be AI tools designed to identify deepfakes and check for content produced by LLMs like GPT-4 — as well as “digital watermarks” and other forms of authentication — companies shouldn’t throw up their hands in the face of the AI arms race. They should develop robust digital literacy and cybersecurity awareness training (CSAT) programs to ensure that their human workforces are in the best possible position to identify evolving cyberthreats, including those that rely on AI.
AI Presents Inherent Cybersecurity Challenges
AI often produces convincing but inaccurate content. As OpenAI admits, ChatGPT "sometimes writes plausible-sounding but incorrect or nonsensical answers." While GPT-4 is more advanced, it faces the same issue. For cybersecurity professionals, the key word here is "plausible": LLMs are generative programs designed to use massive quantities of data to predict which answers best match the prompts they receive. These programs don't know when they're wrong, and because they have no goals of their own, cybercriminals can readily put them to work for their own purposes.
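To see why "plausible" and "correct" are not the same thing, consider a toy sketch of how a generative model chooses its next output. This is not any real model's code; the candidate answers, the invented scores and the softmax helper are all assumptions made purely for illustration. The point is that the loop ranks answers by statistical likelihood, and nothing in it checks for truth.

```python
import math

def softmax(logits):
    """Convert raw scores into probabilities that sum to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidate continuations of a prompt, with invented scores.
# A model only learns which answer is statistically likely, not which is true,
# so the "plausible" pick can still be wrong.
candidates = ["Sydney", "Canberra", "Melbourne"]
logits = [2.1, 1.9, 0.4]  # invented numbers for illustration

probs = softmax(logits)
for token, p in zip(candidates, probs):
    print(f"{token}: {p:.2f}")

best = max(zip(candidates, probs), key=lambda pair: pair[1])
print(f"Most 'plausible' answer: {best[0]}")  # plausible, not necessarily correct
```

In this contrived example the model would confidently output the highest-scoring answer even though it is factually wrong, which is exactly the failure mode that makes LLM output so convincing to an unsuspecting reader.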
OpenAI is working to address the inaccuracies produced by GPT-4, but the same can’t be said for the creators of deepfakes, many of whom are trying to deceive people. Deepfakes are becoming more sophisticated and common, which is why government agencies and the private sector are scrambling to keep pace. The U.S. Defense Advanced Research Projects Agency (DARPA) is requesting tens of millions of dollars per year to develop algorithms capable of spotting deepfakes. Many companies are investing in deepfake identification as well.
There are innumerable ways bad actors can leverage these technologies to manipulate employees and infiltrate companies. Cybercriminals can use deepfakes to impersonate figures of authority within an organization and issue fraudulent instructions for account access, wire transfers and other forms of intrusion and theft. They can ask LLMs to generate convincing deceptive messages — from fake vendor requests to instructions that will lead employees to download malware. The harmful applications of AI will only continue to multiply, and employees have to be wary of this danger.
How AI Can Put Companies at Risk
There are many ways AI is already helping companies thwart and respond to cyberattacks, with detection systems for deepfakes and malware, identity authentication, risk assessments and more. Acumen anticipates that the market size for AI-powered cybersecurity will balloon from $14.9 billion in 2021 to almost $134 billion by 2030.
However, AI may prove to be less of a cybersecurity asset than a liability. When researchers used OpenAI’s GPT-3 (the precursor to ChatGPT and GPT-4) along with other AI services to craft phishing emails several years ago, they discovered that these messages were opened more frequently than the ones produced by humans. GPT-4 is a more capable platform, so it can create even more effective phishing content. For cybercriminals, one significant advantage of using AI is the ability to compose highly customized phishing emails at scale — something that would be impossible without the efficiency of LLMs and other AI platforms.
Phishing is integral to many cyberattacks because it convinces victims to either provide critical information directly or give cybercriminals access to secure networks. The fact that AI is able to produce unlimited phishing messages — which have already proven more than capable of fooling human beings — is a startling glimpse of what’s to come. This threat will only become more urgent over the next several years as AI becomes more advanced and cybercriminals continue to experiment with it.
Human Intelligence Is Vital for Keeping Your Company Safe
With the endless headlines about how AI will displace human beings in an ever-widening array of jobs, it may come as a surprise that a well-trained workforce is the most important cybersecurity asset companies possess. Although hackers are using AI to expose weaknesses in software security and deploy malware that's difficult to detect, many of AI's criminal use cases still rely on deceiving and coercing human beings. Regardless of how sophisticated a phishing email happens to be, it's still just an updated form of social engineering, a tactic cybercriminals have long relied upon.
The necessity of training against social engineering has never been clearer. At the same time, cybersecurity awareness training guidelines need to be updated: employees have long been taught that sloppy writing, grammatical errors or broken English are telltale signs of a phishing attack, but if cybercriminals can produce high-quality prose at the click of a button, that guidance no longer holds and new detection methods are needed. Similarly, as deepfake technology improves, old methods of information verification (such as phone calls) should be reconsidered.
No matter how advanced AI becomes, employees will remain the first line of defense against cyberattacks. This is why they should know all about the most pressing cyberthreats the company faces — including the growing threat posed by AI.