April 15, 2024

Is artificial intelligence the solution to cybersecurity threats?

Artificial intelligence has been a cybersecurity buzzword for a decade, cited as a way to pinpoint vulnerabilities and identify threats by recognizing patterns in large volumes of data. Antivirus products, for example, have long used AI to scan for malicious code, or malware, and send alerts in real time.

But the advent of generative AI, which allows computers to generate complex content (such as text, audio and video) from simple human input, offers more opportunities for cyber defenders. Its proponents promise that it will increase efficiency in cybersecurity, help defenders launch a real-time response to threats, and even help them completely outperform their adversaries.

“Security teams have been using AI to detect vulnerabilities and generate threat alerts for years, but generative AI takes this to the next level,” says Sam King, CEO of security group Veracode.

“We can now use technology not only to detect problems, but also to solve them and ultimately prevent them in the first place.”

Generative AI technology first came to public attention with the launch of OpenAI’s ChatGPT, a consumer chatbot that responds to users’ questions and prompts. Unlike earlier technology, generative AI “has adaptive learning speed, contextual understanding and multimodal data processing, and gets rid of the more rigid, rule-based layer of traditional AI, boosting its security capabilities,” explains Andy Thompson, offensive research evangelist at CyberArk Labs.

So after a year of hype around generative AI, are these promises being kept?

Generative AI is already being used to build dedicated models, chatbots or AI assistants that can help human analysts detect and respond to attacks, similar to ChatGPT, but for cybersecurity. Microsoft has launched one such effort, which it calls Security Copilot, while Google has a model called Sec-PaLM.

“By training the model on all of our threat data, all of our security best practices, all of our knowledge of how to build secure software and secure configurations, we already have customers using it to increase their ability to analyze attacks and malware and to create automated defenses,” says Phil Venables, chief information security officer at Google Cloud.

And there are many more specific use cases, experts say. For example, the technology can be used to simulate attacks or to check that a company’s code remains secure. Veracode’s King says: “You can now take a GenAI model and train it to automatically recommend fixes for insecure code, generate training materials for your security teams, and identify mitigation measures in the event of an identified threat, going beyond simply finding vulnerabilities.”
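
As a rough sketch of the kind of workflow King describes, rather than any vendor’s actual product, a security team might ask a general-purpose language model to propose a remediation for a snippet its scanner has flagged. The model name, prompt and choice of library below are illustrative assumptions:

    # Illustrative sketch only: asking a general-purpose LLM to propose a fix
    # for an insecure code snippet. The model name and prompt are placeholders,
    # not a description of any vendor's security tooling.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    insecure_snippet = '''
    query = "SELECT * FROM users WHERE name = '" + username + "'"  # SQL injection risk
    cursor.execute(query)
    '''

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are a security reviewer. Suggest a minimal, safe fix."},
            {"role": "user", "content": insecure_snippet},
        ],
    )

    # The proposed remediation would still need review by a human engineer.
    print(response.choices[0].message.content)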

Generative AI can also be used to “generate [and] synthesize data” with which to train machine learning models, says Gang Wang, associate professor of computer science at the University of Illinois Grainger College of Engineering. “This is particularly useful for security tasks where data is sparse or lacks diversity,” he notes.
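
Wang’s point about sparse data can be illustrated with a toy example: when only a handful of samples of a rare attack class exist, synthetic variants can be generated around them so a detection model has more to learn from. The feature values below are made up, and the simple perturbation approach is a deliberately minimal stand-in for richer generative techniques:

    # Toy sketch of synthetic data generation for a sparse attack class:
    # create new feature vectors by perturbing the few real examples we have.
    import numpy as np

    rng = np.random.default_rng(0)
    # Pretend we only have 20 real samples of a rare attack, each with 8 features.
    rare_attack_samples = rng.normal(loc=5.0, scale=0.5, size=(20, 8))

    def synthesize(samples: np.ndarray, n_new: int, noise: float = 0.1) -> np.ndarray:
        """Draw base rows at random and add small Gaussian noise around them."""
        idx = rng.integers(0, len(samples), size=n_new)
        return samples[idx] + rng.normal(scale=noise, size=(n_new, samples.shape[1]))

    augmented = np.vstack([rare_attack_samples, synthesize(rare_attack_samples, 200)])
    print(augmented.shape)  # (220, 8): enough examples to help balance a training set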

The potential to develop AI cybersecurity systems is now driving deals in the cyber sector, such as Cisco’s September acquisition of US security software maker Splunk for $28 billion. “This acquisition reflects a broader trend and illustrates the industry’s growing adoption of AI to enhance cyber defenses,” says King.

He notes that these deals allow the acquirer to quickly expand its AI capabilities while giving it access to more data to effectively train its AI models.

However, Wang cautions that AI-driven cybersecurity cannot “completely replace existing traditional methods.” To be successful, “different approaches complement each other to provide a more complete view of cyber threats and offer protections from different perspectives,” he says.

For example, AI tools can have high false positive rates, meaning they are not yet accurate enough to be trusted on their own. And while they can quickly identify and stop known attacks, they may struggle against novel threats, such as so-called “zero-day” attacks, which differ from anything launched in the past.

As enthusiasm for AI continues to sweep the tech sector, cyber professionals must implement it carefully, experts warn, maintaining standards around privacy and data protection, for example. According to data from Netskope Threat Labs, sensitive data is shared in generative AI queries every hour of the working day at large organizations, which could provide hackers with material for attacks.

Steve Stone, director of Rubrik Zero Labs at data security group Rubrik, also points to the emergence of hacker-friendly generative AI chatbots, such as “FraudGPT” and “WormGPT,” which are designed to allow even those with minimal technical skills to launch sophisticated cyberattacks.

Some hackers are using AI tools to write and deploy social engineering scams at scale and in a more targeted way, for example by replicating a person’s writing style. According to Max Heinemeyer, chief product officer at Darktrace, a cybersecurity AI company, there was a 135 percent increase in “new social engineering attacks” from January to February 2023, following the introduction of ChatGPT.

“2024 will show how more advanced actors such as APTs [advanced persistent threats], nation-state attackers and advanced ransomware gangs have begun to embrace AI,” he says. “The effect will be even faster, more scalable, more personalized and contextualized attacks, with reduced dwell time.”

Despite this, many cyber experts remain optimistic that the technology will be a boon to cyber professionals in general. “Ultimately, it is the defenders who have the advantage, since we own the technology and can therefore direct its development with specific use cases in mind,” says Venables. “Essentially, we have home advantage and we intend to make the most of it.”

