
AI-Powered Cyber Threats
3rd April, 2025
Written by: Kyle M., member of the Onca Technologies Team
In a short period of time, AI has taken the world by storm, quickly becoming an indispensable tool for making people's lives more efficient. Yet for all the promising opportunities AI offers, it is also being exploited by cyber criminals for nefarious purposes. In this blog, we'll explore five ways in which AI is being used to enhance cyber threats and provide actionable tips to protect yourself against AI-powered attacks.
Adaptive Malware
Previously, the process of developing new malware was much slower than it is today, allowing traditional anti-virus (AV) software to detect and neutralise threats before they caused significant damage. AI, however, has enabled malware to adapt dynamically: hundreds of thousands of small modifications to the code of a known cyber threat can be made rapidly to evade the signature-based detection of traditional AV, rendering traditional AV alone nearly obsolete.
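To see why signature-based detection struggles against rapid variants, consider a toy sketch (purely illustrative; real AV signatures are more sophisticated than a file hash): even a one-byte change to a sample produces a completely different hash, so a signature built from the original no longer matches the variant.

```python
import hashlib

def signature(data: bytes) -> str:
    """A toy 'signature': the SHA-256 hash of the file contents."""
    return hashlib.sha256(data).hexdigest()

# A signature database containing one known-bad sample.
KNOWN_BAD = {signature(b"malicious payload v1")}

original = b"malicious payload v1"
variant  = b"malicious payload v2"   # a trivial, automated tweak

print(signature(original) in KNOWN_BAD)  # True  -- the known sample is caught
print(signature(variant) in KNOWN_BAD)   # False -- the variant slips past
```

This is the gap that behaviour-based and zero-trust approaches aim to close: they do not depend on having seen the exact variant before.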
According to the AV-TEST Institute, over 450,000 new malware and potentially unwanted applications (PUA) are registered daily. With new variations in malware occurring at an unprecedented rate, zero-trust cyber security software like AppGuard is needed within endpoints to protect against zero-day attacks. To find out more, refer to our previous blog on zero-trust cyber security.
AI Vulnerability Detection
Cyber criminals actively seek vulnerabilities in software to exploit before they are patched. AI has accelerated the process of finding zero-day vulnerabilities, far exceeding traditional search methods. Because of this, software developers are increasingly burdened with the pressure of outpacing cyber criminals to safeguard users.
Brute Force Password Attacks
A brute force attack is a method of hacking that involves systematically guessing passwords, login credentials, and encryption keys through trial and error. Traditionally, the rudimentary nature of this attack meant it could be prevented by conventional cyber security measures, such as account lockouts and strong passwords. AI, however, has dramatically improved the strategy behind these attacks, making them harder to combat.
Cyber criminals now use AI to analyse leaked credentials from data breaches to identify common patterns in password formation. From this, a list of common, likely passwords can be formed, as well as more specific passwords using online information about a target (birthdays, names of pets, interests).
AI can also be used for credential stuffing: if an old password is found on the dark web, AI can generate likely alternatives to it. This is why it is no longer recommended to change your password every three months; people tend to make only minimal changes each time for the sake of memorisation, making it easy for criminals to access accounts that are not secured by MFA.
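On the defensive side, you can check whether a password has already appeared in a breach without ever sending the password itself. Services such as Have I Been Pwned's Pwned Passwords use a k-anonymity scheme: only the first five characters of the password's SHA-1 hash are sent, and the service returns matching hash suffixes for local comparison. The sketch below illustrates the idea with a mocked response rather than a live network request (the helper names are ours, not part of any API):

```python
import hashlib

def sha1_prefix_suffix(password: str) -> tuple[str, str]:
    """Split the uppercase SHA-1 hex digest of a password into the
    5-character prefix (the only part sent to the service) and the
    35-character suffix (compared locally against the response)."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def is_breached(password: str, suffixes_for_prefix: set[str]) -> bool:
    """Check the locally kept suffix against the suffixes the service
    returned for our 5-character prefix."""
    _, suffix = sha1_prefix_suffix(password)
    return suffix in suffixes_for_prefix

# Illustrative only: pretend the service returned this suffix set.
prefix, suffix = sha1_prefix_suffix("password")
print(prefix)                              # 5-char prefix sent over the wire
print(is_breached("password", {suffix}))   # True with our mock response
```

The key point is that the full password, and even its full hash, never leaves your machine.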
Phishing Attacks
Generative AI tools such as ChatGPT have been used to make phishing emails more professional and persuasive. AI is also used to imitate the writing style of a target by scraping their online activity, making these social-engineering scams even more convincing.
Moreover, AI now automates the entire phishing process, from sourcing and targeting victims to crafting and distributing deceptive messages at scale, making attempts more difficult to identify and prevent.
Deepfake Social Engineering Attacks
Cyber criminals are now using deepfake videos and audio to deceive individuals into transferring money or revealing sensitive information.
Recently, a finance worker at the multinational engineering firm Arup transferred the equivalent of over £20 million after falling victim to a deepfake video call. A briefing from the Hong Kong police revealed that the victim was initially suspicious of email communication from an individual purporting to be the company's UK chief financial officer, but was convinced of its legitimacy after a video conference call in which every attendee was a deepfake. Such incidents underscore the need to foster a culture of caution in the workplace, especially as remote working arrangements grow in popularity.
What can you do to protect yourself from AI-powered cyber-attacks?
1. Strengthen Authentication & Access Control
- Use Multi-Factor Authentication (MFA) on all your accounts to prevent unauthorised access, even if your passwords are compromised.
- Use Strong, Unique Passwords: human beings are notoriously poor at creating unique passwords, because we need them to be memorable. Using a password manager, such as 1Password, to create and store complex passwords is essential.
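A password manager will generate strong passwords for you, but the principle is easy to sketch. This minimal Python example (illustrative only, not part of any product mentioned above) uses the standard `secrets` module, which draws from a cryptographically secure random source rather than an ordinary pseudo-random generator:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password from letters, digits, and punctuation
    using a cryptographically secure random source."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different on every run
```

A 20-character password drawn from this alphabet is far beyond what pattern-guessing or brute force attacks can realistically crack, precisely because it contains no memorable structure for AI to learn from.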
2. Stay Updated & Patch Regularly
- Update Software & Firmware to fix vulnerabilities that AI-driven attacks exploit.
- Enable Auto-Updates to reduce the risk of missing critical security patches.
3. Secure Your Network & Devices
- Use AppGuard: AI-driven attacks can bypass weak defences, so invest in advanced access-control endpoint protection.
4. Detect & Defend Against Phishing
- Verify Email Sources: AI-powered phishing scams mimic real communications—always check sender details and avoid clicking unknown links.
- Use AI-Powered Email Security Tools: tools such as Microsoft Defender can help detect phishing threats.
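"Check sender details" can also be partially automated. One classic red flag is a Reply-To address on a different domain from the From address, so replies go to the attacker. The sketch below (the message text and domains are invented for illustration) uses Python's standard `email` module to surface that mismatch:

```python
from email import message_from_string
from email.utils import parseaddr

# An invented example of a suspicious message.
RAW = """\
From: "IT Support" <support@micros0ft-helpdesk.example>
Reply-To: attacker@freemail.example
Subject: Urgent: verify your account
To: victim@company.example

Please log in via the link below immediately.
"""

def sender_domains(raw: str) -> tuple[str, str]:
    """Return the (From, Reply-To) domains of a raw email message."""
    msg = message_from_string(raw)
    from_domain = parseaddr(msg.get("From", ""))[1].rpartition("@")[2]
    reply_domain = parseaddr(msg.get("Reply-To", ""))[1].rpartition("@")[2]
    return from_domain, reply_domain

frm, reply = sender_domains(RAW)
if reply and reply != frm:
    print(f"Warning: Reply-To domain ({reply}) differs from From domain ({frm})")
```

A check like this is no substitute for a full email security product, which also validates SPF, DKIM, and DMARC, but it shows the kind of detail worth eyeballing before you click anything.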
5. Prevent Deepfake & Social Engineering Attacks
- Be Sceptical of Unusual Requests: AI can generate convincing fake voices and videos. Verify sensitive requests via secondary communication channels.
- Limit Personal Info Online: Attackers use AI to scrape data and create personalised attacks.
6. Train & Educate Yourself
- Stay Informed on AI Threats: Cybercriminals evolve their methods constantly. Keep up to date with us on LinkedIn to continue learning about the latest cyber threats.
- Conduct Regular Security Drills: If possible, test your organisation's ability to recognise phishing, social engineering, and other AI-powered threats regularly. The harder and less expected these drills are, the better (cyber criminals won't go easy on you, after all!).
If you need support, Onca Technologies is always ready to help safeguard your organisation from cyber threats. Our Digital Risk Protection (DRP) service is designed to assess your organisation's vulnerabilities, alert you to leaked credentials on the dark web, and conduct a cyber maturity assessment to identify ways in which your organisation can improve its cyber security posture.
No matter the concern, we’ve got your back.