Eric Horvitz, Microsoft’s chief scientific officer, testified before the U.S. Senate Armed Services Subcommittee on Cybersecurity on May 3, saying he was confident that organizations would face new challenges as cybersecurity attacks increase in sophistication, including through the use of AI.
While AI is improving the ability to detect cybersecurity threats, he explained, threat actors are also upping the ante.
“While there is little information to date on the active use of AI in cyberattacks, it is widely accepted that AI technologies can be used to scale cyberattacks via various forms of probing and automation … referred to as offensive AI,” he said.
However, it is not only the military that needs to stay ahead of threat actors using AI to scale up their attacks and evade detection. As enterprises battle a growing number of major security breaches, they need to prepare for increasingly sophisticated AI-driven cybercrime, experts say.
Attackers are preparing to make a big leap with AI
“We haven’t seen the ‘big bang’ yet, where ‘Terminator’ cyber AI arrives and wreaks havoc everywhere, but attackers are preparing that battlefield,” Max Heinemeyer, vice president of cyber innovation at Darktrace, told VentureBeat. What we are seeing now, he added, “is a big driver in cybersecurity: when attackers want to make a big leap, it will be a major disruptor, with a mindset-changing attack.”
For example, there have been non-AI-driven attacks, such as the WannaCry ransomware attack in 2017, that introduced what he described as new cyber weapons, as well as malware now in use in the Ukraine-Russia war that had rarely been seen before. “That kind of mindset-changing attack is where we would expect to see AI,” he said.
So far, the use of AI in the Ukraine-Russia war has remained limited to Russia’s use of deepfakes and Ukraine’s use of Clearview AI’s controversial facial recognition software, at least publicly. But security pros are bracing themselves: a Darktrace survey last year found that a growing number of IT security leaders are concerned about the potential use of AI by cybercriminals. Sixty percent of respondents said human responses are falling behind as cyberattacks accelerate, while nearly all (96%) have begun guarding their companies against AI-based threats, mainly email, advanced spear phishing and impersonation threats.
“In the real world, there has been very little actual research found on attackers using machine learning or AI, but the bad guys are already using AI,” said Corey Nachreiner, chief security officer at WatchGuard.
Threat actors are already using machine learning to assist with social engineering attacks, he said. If they get hold of large datasets containing many passwords, they can learn patterns in those passwords that make their password cracking more effective.
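To illustrate the kind of pattern learning described above, here is a minimal, hypothetical sketch: a character-level bigram model trained on a small invented “leaked” password list, then used to rank candidate guesses so that corpus-like guesses are tried first. All data, names and scoring choices here are illustrative assumptions, not a real cracking tool.

```python
# Hypothetical sketch: learn character-bigram patterns from a "leaked"
# password list and rank guesses by how well they match those patterns.
# All data here is invented for illustration.
from collections import defaultdict

def train_bigram_model(passwords):
    """Count character-bigram frequencies; ^ and $ mark start/end."""
    counts = defaultdict(lambda: defaultdict(int))
    for pw in passwords:
        for a, b in zip("^" + pw, pw + "$"):
            counts[a][b] += 1
    return counts

def score(model, guess):
    """Higher score = guess follows patterns common in the corpus."""
    s = 0.0
    for a, b in zip("^" + guess, guess + "$"):
        total = sum(model[a].values()) or 1
        s += model[a][b] / total
    return s / (len(guess) + 1)

leaked = ["password1", "passw0rd", "summer2021", "winter2022", "pass1234"]
model = train_bigram_model(leaked)

# A guess resembling the corpus outranks a random string, so an attacker
# can prioritize it instead of brute-forcing blindly.
guesses = sorted(["password2", "zqxjkvbn"],
                 key=lambda g: score(model, g), reverse=True)
print(guesses[0])
```

The same idea scales up in real attacks with far larger corpora and stronger models; the point is only that leaked data teaches the attacker what human-chosen passwords look like.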
Machine learning algorithms will also drive more spear phishing attacks, that is, highly targeted rather than generic fraudulent emails, than in the past, he said. “Unfortunately, it’s harder to train users not to click on spear phishing messages,” he said.
What businesses should really be concerned about
Security professionals may not be talking openly about threat actors using AI, but they are seeing more and faster attacks and sense that greater use of AI is on the horizon, according to Seth Siegel, North American leader of artificial intelligence consulting at Infosys.
“I think they see it’s fast and furious out there,” he told VentureBeat. “The threat landscape is really more aggressive than it was three years ago, and it’s getting worse.”
However, he cautioned that organizations should be concerned about far more than spear phishing attacks. “The question really needs to be, how do companies deal with one of the biggest AI risks, which is the introduction of bad data into your machine learning models?” he said.
These efforts come not from individual attackers but from sophisticated nation-state hackers and criminal gangs.
“The problem is that they use the most accessible, fastest, most state-of-the-art technology, and frankly, they are up against IT departments that are not equipped to manage this level of bad actor,” he said. “Basically, you can’t bring a human tool to an AI fight.”
4 ways to prepare for the future of AI cyberattacks
Experts say security professionals should take several key steps to prepare for the future of AI-powered cyberattacks:
Provide ongoing security awareness training.
The problem with spear phishing, Nachreiner said, is that the emails are personalized to look like genuine business messages, which makes them harder to block. “You need security awareness training so that users learn to expect these emails and treat them with suspicion, even when they arrive in a business context,” he said.
Use AI-driven tools.
Infosec teams should embrace AI as a core part of their security strategy, Heinemeyer said. “They shouldn’t wait to use AI or think of it as just a cherry on top; they should anticipate and implement AI now,” he explained. “I don’t think they realize how essential it is right now, but once threat actors start using more furious automation, and perhaps more destructive attacks against the West, you really want to have AI.”
Think beyond individual bad actors.
Companies need to shift their perspective away from the individual bad actor, Siegel said. “They need to think about hacking at the nation-state level and at the criminal-gang level, take a defensive posture, and understand that this is just something they have to deal with on a day-to-day basis.”
Have a proactive strategy.
Organizations should also make sure they stay on top of basic security hygiene, Siegel said. “When patches come out, you have to treat them with the level of criticality they deserve,” he explained, “and you need to audit your data and models to make sure you aren’t introducing malicious information into the models.”
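The kind of data auditing Siegel describes can start with something as simple as outlier screening before a model is retrained. Below is a minimal sketch, with invented data and an illustrative threshold, that uses a median-absolute-deviation check (robust to the outlier itself) to flag a hypothetical poisoned row; real poisoning defenses are considerably more involved.

```python
# Hypothetical sketch: flag suspicious rows in training data before
# (re)fitting a model. Data and threshold are invented for illustration.
import statistics

def flag_outliers(rows, threshold=3.5):
    """Return sorted indices of rows with any feature whose modified
    z-score (based on median absolute deviation) exceeds threshold."""
    flagged = set()
    for col in zip(*rows):
        med = statistics.median(col)
        mad = statistics.median(abs(x - med) for x in col) or 1.0
        for i, x in enumerate(col):
            # 0.6745 rescales MAD to ~standard-deviation units
            if 0.6745 * abs(x - med) / mad > threshold:
                flagged.add(i)
    return sorted(flagged)

# Mostly normal (duration, request-count) pairs, plus one injected
# extreme row at index 4 standing in for poisoned data.
training = [(1.0, 10), (1.2, 12), (0.9, 11), (1.1, 9), (50.0, 500)]
print(flag_outliers(training))
```

Median-based statistics matter here: a mean/standard-deviation check can be dragged toward the poisoned point by the poison itself, while the median stays anchored to the clean majority.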
Siegel added that his organization embeds cybersecurity professionals in its data science teams and also trains data scientists in cybersecurity techniques.
The future of offensive AI
According to Nachreiner, more “adversarial” machine learning is coming down the pike.
“This is about how we use machine learning defensively; people will start using it against us,” he said.
For example, one way organizations use AI and machine learning today is to proactively catch malware, because malware changes so rapidly that signature-based detection no longer catches it reliably. In the future, however, these ML models will themselves be susceptible to attacks by threat actors.
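To see why such models are attackable, consider a deliberately toy linear scorer; the feature names, weights and threshold below are all invented for illustration. An attacker who can guess which traits a model rewards can pad a malicious file with benign-looking features until it scores under the detection threshold, without removing the payload:

```python
# Hypothetical sketch of an evasion attack on a toy linear malware scorer.
# Feature names, weights and the threshold are invented; real detectors
# (and real evasion attacks) are far more complex.

WEIGHTS = {
    "calls_virtualalloc": 2.0,    # suspicious API use
    "high_entropy_section": 1.5,  # packed/encrypted payload
    "valid_signature": -2.5,      # traits common in benign software
    "has_gui_resources": -1.0,
}
THRESHOLD = 2.0  # score above this => classified as malware

def score(features):
    return sum(WEIGHTS.get(f, 0.0) for f in features)

malicious = {"calls_virtualalloc", "high_entropy_section"}
print(score(malicious) > THRESHOLD)   # caught by the model

# The attacker keeps the payload but adds benign-looking traits the
# model rewards, dragging the total score under the threshold.
evasive = malicious | {"valid_signature", "has_gui_resources"}
print(score(evasive) > THRESHOLD)     # slips past the model
```

This is the simplest form of the cat-and-mouse dynamic the article describes: the defender's learned decision boundary becomes an input the attacker optimizes against.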
According to Heinemeyer, the offensive AI landscape will only continue to get worse, with rising geopolitical tensions contributing to the trend. He pointed to recent research from Georgetown University exploring how Chinese universities that conduct AI research are linked to state-sponsored hacking. “It says a lot about how closely the Chinese government works with academia and universities, like other governments, and with AI research, to potentially use it for cyber operations and hacking.”
“As I think about this research and other things that are happening, I expect that a year from now my view of the threats will be darker than it is today,” he said. However, he noted that the defensive outlook will also improve as more organizations adopt AI. “We’re still stuck in this cat-and-mouse game,” he said.