
Recent advances in Artificial Intelligence (AI) are being exploited by cybercriminals to create new, sophisticated attacks. AI enables faster, stealthier and more automated attacks at extraordinary scale: this trend is known as Offensive AI.


How is AI used to develop attacks and security analyses?

Artificial Intelligence is a powerful tool, and attackers use it to perpetrate attacks in various ways and for various purposes: from data theft, detection evasion and vulnerability discovery to social engineering.

A notorious and threatening example of Offensive AI is the use of deepfake technology for phishing attacks. Deepfakes are fake images or sounds (more generally, media) created via deep learning to impersonate a victim, originally by face swapping [1]. Using deepfakes, attackers can carry out phishing attacks, for example by mimicking CEOs to extort money or sensitive information from employees. Several incidents of this type have occurred in the past few years, in particular attacks using voice cloning [2, 3]. Deepfake phishing and other social engineering attacks are considered among the most serious threats of Offensive AI, by both academic and industrial experts [4].

Another concerning threat is the efficiency of AI and Machine Learning (ML) at extracting information from black-box systems: in other words, AI for reverse engineering (RE). Offensive AI accelerates the long and difficult reverse engineering process by helping to analyze the internals of compiled code [5]. Here, Offensive AI becomes a threat to intellectual property and privacy, and may also be used to discover vulnerabilities, allowing attackers to build exploits faster. While most of the attention has been given to reverse engineering software applications, AI has also been applied to integrated circuit (IC) netlist reverse engineering to bypass anti-RE countermeasures such as obfuscation [6].
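
To make this concrete, here is a minimal, hypothetical sketch of the idea behind [5]: a classifier learns to recognize function starts from raw bytes of compiled code. The byte windows, labels and the linear model are illustrative stand-ins for the large datasets and recurrent networks used in the paper.

```python
# Minimal sketch of ML-assisted binary analysis in the spirit of [5]:
# classify whether a byte offset in compiled code marks a function start.
# The byte strings and labels below are hypothetical placeholders; a real
# dataset would be extracted from binaries with known symbol tables.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Fixed-size byte windows (hex-encoded) around candidate offsets.
windows = [
    "55 48 89 e5 48 83 ec 10",  # typical x86-64 prologue  -> function start
    "48 8b 45 f8 48 83 c4 10",  # mid-function bytes       -> not a start
    "55 48 89 e5 53 48 83 ec",  # another prologue variant -> function start
    "c3 0f 1f 44 00 00 90 90",  # ret + padding            -> not a start
]
labels = [1, 0, 1, 0]  # 1 = function start, 0 = not

# Byte unigrams/bigrams as features; a linear model stands in for the
# recurrent network used in the original work.
vectorizer = CountVectorizer(analyzer="word", ngram_range=(1, 2))
X = vectorizer.fit_transform(windows)
clf = LogisticRegression().fit(X, labels)

print(clf.predict(vectorizer.transform(["55 48 89 e5 48 83 ec 20"])))
```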

Side-channel attacks and evaluations (SCA) are another example where Artificial Intelligence is leveraged to bypass defences. In these attacks, deep neural networks are used to defeat countermeasures and extract private keys from the physical emanations leaking from crypto engines. In this context too, AI enables faster and more automated attacks [7].
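
As an illustration of how such evaluations work, the sketch below trains a small convolutional network to predict one key byte from labelled traces, in the style of profiled deep-learning SCA. The network shape, trace length and synthetic data are assumptions for the example, not a real attack setup.

```python
# Minimal sketch of a deep-learning side-channel evaluation, assuming a
# labelled set of traces from a profiling device: a 1D CNN learns to
# predict the value of one key byte from a raw trace. Shapes and the
# random "traces" are illustrative placeholders only.
import torch
import torch.nn as nn

N_SAMPLES = 700                              # points per trace (dataset-dependent)
traces = torch.randn(64, 1, N_SAMPLES)       # placeholder profiling traces
key_bytes = torch.randint(0, 256, (64,))     # labels known during profiling

model = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=11, padding=5), nn.ReLU(),
    nn.AvgPool1d(2),
    nn.Conv1d(8, 16, kernel_size=11, padding=5), nn.ReLU(),
    nn.AvgPool1d(2),
    nn.Flatten(),
    nn.Linear(16 * (N_SAMPLES // 4), 256),   # one score per key-byte guess
)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(5):                           # profiling phase (training)
    opt.zero_grad()
    loss = loss_fn(model(traces), key_bytes)
    loss.backward()
    opt.step()

# Attack phase: accumulate log-likelihoods over traces from the target;
# the highest-scoring class is the recovered key-byte candidate.
scores = model(traces).log_softmax(dim=1).sum(dim=0)
print("best key-byte guess:", scores.argmax().item())
```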

Overall, Offensive AI enables new levels of attack automation. Attackers are now capable of conducting “push-button” phishing campaigns at large scale, for example by automatically crafting malicious emails and websites based on victims’ data, and by automating the launch and the successive steps of the attack. AI thereby makes it possible to scale up attack coverage, reduce the necessary human interaction and, ultimately, increase attackers’ success rates.


The security challenges of responding to AI attacks

The efficiency of Offensive AI makes it difficult for organizations to protect themselves against it. Firstly, human-controlled detection systems cannot keep up with the pace and scale of these attacks when they occur: the response capacity of defence teams, in terms of volume and speed, can be overwhelmed by large attack campaigns. Secondly, traditional rule- or signature-based detection tools can detect known attacks but are ineffective against new or unseen ones.

Lastly, some AI attacks, especially those relying on social engineering such as email phishing, are particularly concerning because they exploit human manipulation and therefore fall only partially within the scope of the cyber defence systems usually deployed in the industry.


How to respond to Offensive Artificial Intelligence?

While humans and traditional defence structures struggle to keep up with these new threats, new protections are emerging that also leverage the power of AI: this is Defensive AI. Indeed, if AI can be used as a tool to launch faster and more dangerous attacks, it can also harden defences and speed up the response of security teams when a threat arises.

Using AI, it is possible to create accurate and reliable detection systems designed to spot attack attempts, intrusions or anomalies. This is useful in several cybersecurity scenarios: fraud detection, ransomware identification, physical attack detection, etc. Moreover, some recent AI detection methods rely on Machine Learning algorithms to model normal device operation, which makes it possible to detect unusual, suspicious behaviour and thus to catch new and unknown attacks.
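
As a simple illustration of this “model the normal” approach, the sketch below fits an Isolation Forest (one possible algorithm among many) on features from attack-free operation only, then flags deviating behaviour at run time; the feature names and values are hypothetical.

```python
# Minimal sketch of anomaly detection by modelling normal behaviour:
# an Isolation Forest is fitted on normal-operation features only and
# flags deviations as anomalies. Features and values are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Features per time window, e.g. [messages/s, mean payload size, CPU load].
normal = rng.normal(loc=[100.0, 32.0, 0.3], scale=[5.0, 2.0, 0.05],
                    size=(1000, 3))

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# At run time, score new windows: -1 = anomaly, 1 = normal.
windows = np.array([[102.0, 31.5, 0.32],    # looks like normal traffic
                    [900.0, 8.0, 0.95]])    # flooding-like burst
print(detector.predict(windows))
```

Because the model only ever sees normal behaviour during training, it needs no signature of the attack itself, which is what lets this approach catch previously unseen threats.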

Another great benefit of AI is that big data tools and solutions can collect, ingest and analyze vast quantities of information in a very short time. Access to past data, combined with advanced analytics techniques, makes it possible to detect threats and to identify and respond to incidents with unmatched speed and accuracy.

To learn more about this topic, follow our upcoming blog articles, where we will explore Defensive AI applications and challenges in depth.


How does Secure-IC address Offensive AI?

Secure-IC offers advanced attack detection systems at both the hardware and software levels to deal with advanced attacks. The Securyzr™ integrated Security Service Platform (iSSP) implements Defensive AI-based protections capable of detecting unseen attacks.

This solution includes an AI-based Intrusion Detection System (IDS), which can be deployed on devices such as autonomous vehicle ECUs or smartphones and is compatible with several communication interfaces: CAN bus, Ethernet, custom sensors, etc. The embedded IDS uses Machine Learning to analyze traffic in real time and raises alerts when it detects attacks or anomalies through suspicious, unusual activity.
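
By way of illustration only (Secure-IC’s actual IDS models are not public), here is a simplified, statistics-based stand-in for one signal such a system can learn from CAN traffic: the timing regularity of periodic frames, where injected frames show up as abnormally short inter-arrival gaps. The CAN ID, period and threshold are hypothetical.

```python
# Illustrative stand-in for one CAN IDS signal: per-ID inter-arrival times.
# Periodic frames have very regular timing, so an injected frame arrives
# long before the learned period has elapsed. Values are hypothetical.
from collections import defaultdict

class CanTimingIDS:
    def __init__(self, min_gap_ratio=0.5):
        self.expected_period = {}       # learned mean gap per CAN ID
        self.last_seen = {}
        self.min_gap_ratio = min_gap_ratio

    def train(self, normal_log):
        """normal_log: list of (timestamp, can_id) from attack-free traffic."""
        gaps, last = defaultdict(list), {}
        for t, cid in normal_log:
            if cid in last:
                gaps[cid].append(t - last[cid])
            last[cid] = t
        self.expected_period = {cid: sum(g) / len(g) for cid, g in gaps.items()}

    def observe(self, t, cid):
        """Return True if this frame's timing is suspicious."""
        alert = False
        if cid in self.expected_period and cid in self.last_seen:
            gap = t - self.last_seen[cid]
            alert = gap < self.min_gap_ratio * self.expected_period[cid]
        self.last_seen[cid] = t
        return alert

# Frame 0x244 normally arrives every 10 ms; an injected copy 1 ms after
# the legitimate one violates the learned period and raises an alert.
ids = CanTimingIDS()
ids.train([(i * 0.010, 0x244) for i in range(100)])
print(ids.observe(1.000, 0x244))   # False: on schedule
print(ids.observe(1.001, 0x244))   # True: injected frame
```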

The alerts, along with other telemetry and security information, are then sent through a secure channel to a remote server, for example a SOC (Security Operations Center). There, data from fleets of devices can be aggregated and correlated using advanced AI analytics, giving the SOC team a global overview of the security state of all devices so that it can perform further analysis and take appropriate response actions quickly.
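
The sketch below illustrates the kind of fleet-level correlation such a server can perform: grouping alerts across devices makes a coordinated campaign against a whole model line stand out from isolated incidents. The field names and alert records are hypothetical.

```python
# Illustrative sketch of fleet-level alert correlation on the server side.
# Records are hypothetical; a real SOC pipeline would ingest these from
# the devices' secure telemetry channel.
import pandas as pd

alerts = pd.DataFrame([
    {"device": "ecu-001", "model": "X1", "alert": "can_flood", "hour": 14},
    {"device": "ecu-042", "model": "X1", "alert": "can_flood", "hour": 14},
    {"device": "ecu-107", "model": "X1", "alert": "can_flood", "hour": 15},
    {"device": "ecu-350", "model": "Z9", "alert": "auth_fail", "hour": 3},
])

# How many distinct devices per model raise the same alert type, and when
# the wave started: three X1 units with the same alert suggests a campaign.
campaign = (alerts.groupby(["model", "alert"])
                  .agg(devices=("device", "nunique"),
                       first_hour=("hour", "min")))
print(campaign)
```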

On the security evaluation side, the Laboryzr™ platform can perform AI-enhanced side-channel attacks to ensure the highest level of security for embedded systems, at the pre-silicon and post-silicon stages as well as at the software level.

For more information on this topic, we invite you to read the complete publication of the Security Science Factory (SSF).

Do you have questions on this topic and on our protection solutions? We are here to help.
Contact us



References

[1] Yisroel Mirsky and Wenke Lee. “The Creation and Detection of Deepfakes: A Survey.” ACM Computing Surveys (CSUR) 54, no. 1 (2021): 1–41.

[2] https://www.forbes.com/sites/thomasbrewster/2021/10/14/huge-bank-fraud-uses-deep-fake-voice-tech-to-steal-millions/?sh=390b54575591

[3] https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402

[4] Yisroel Mirsky et al. “The Threat of Offensive AI to Organizations.” Computers & Security (2022): 103006.

[5] Eui Chul Richard Shin, Dawn Song, and Reza Moazzezi. “Recognizing Functions in Binaries with Neural Networks.” 24th USENIX Security Symposium (USENIX Security 15), 2015.

[6] Leonid Azriel et al. “A Survey of Algorithmic Methods in IC Reverse Engineering.” Journal of Cryptographic Engineering 11, no. 3 (2021): 299–315.

[7] T. Perianin, S. Carré, V. Dyseryn, A. Facon, and S. Guilley. “End-to-End Automated Cache-Timing Attack Driven by Machine Learning.” Journal of Cryptographic Engineering 11, no. 2 (2021): 135–146.
