The Role of AI in Cybersecurity Resilience: Opportunities and Threats

July 10, 2023
AI and Cyber Security

Whether you turn on the TV or radio, browse news sites, or join workplace conversations, there’s a lot of buzz about OpenAI’s ChatGPT. And rightfully so. The phase we are in now feels akin to when the web browser became widely available around 1997/1998. However, I also see a downside. This downside isn’t about professions that deny the change or fail to adapt quickly enough, but about your data security. This concern is supported by recent research from Europol that I’d like to share and elaborate on.

The World of Cyber

In the world of cyber, stories spread quickly. AI has become such a buzzword that seemingly every digital security solution now claims to have AI on board. So, ask the right questions. What’s at stake is important: protecting the core of large computer networks and thus securing data. You should expect solid contracts with suppliers covering data security and data processing. Additionally, the competence and reliability of the suppliers’ employees should be assessed continuously.

This is no unnecessary luxury, as evidenced by numerous media reports confirming that organizations are often victims of data breaches due to suppliers. A recent example involves the Dutch Railways (NS), where a supplier’s supplier was allegedly hacked, potentially compromising NS passengers’ data.

Widespread Use of ChatGPT

The release and widespread use of ChatGPT, a large language model (LLM) developed by OpenAI, has garnered much attention worldwide. This is mainly due to its ability to quickly provide ready-made answers applicable to a wide range of contexts. Unlike Google, where you have to sift through fragmented information, ChatGPT seems to be the solution to all your questions.

While this offers great opportunities for legitimate businesses, there is also a downside that Europol is now seriously warning about. Criminals and state actors could use LLMs for their own purposes.

Cybercriminals and AI

A growing number of cybercriminals are now attempting to misuse AI-based chatbots to develop malware and other malicious tools. Europol has expressed concern about the potential for cybercriminals to exploit various techniques to bypass the safety features OpenAI has implemented to prevent the generation of harmful content. This is a worrying trend. The Europol report highlights several examples of how ChatGPT can be used maliciously, such as optimizing phishing campaigns, impersonating others in online conversations, and generating harmful code.

Why a Unified Vision on Cybersecurity Matters

It’s crucial that all employees have a unified vision on cybersecurity. Why? Because cybersecurity is not just an IT issue; it’s an organizational issue. When everyone understands the importance of cybersecurity, they are more likely to recognize threats and take necessary precautions. This collective vigilance significantly enhances the organization’s defense against cyber threats.

Europol Innovation Lab

In response to the growing public attention on ChatGPT (the platform already has over 100 million active users), Europol selected the LLM ChatGPT for examination in workshops.

While ChatGPT is a success for its users, that very success also makes the platform a lucrative target for cybercriminals. In short, the goal was to investigate how criminals could misuse LLMs (such as ChatGPT) and how these models could assist investigators in their daily work.

The experts who participated in the workshops represented the full spectrum of Europol’s expertise, including operations, analysis, organized crime, cybercrime, counterterrorism, and information technology.

Examples of Misuse

To prevent malicious use of ChatGPT, OpenAI has implemented various safety features. However, the Europol report emphasizes that criminals, despite these precautions, can quickly find techniques to bypass content moderation restrictions.

There are countless examples of how ChatGPT could be used by cybercriminals (or is already being used). Here are a few:

1. Optimizing Phishing Campaigns: Messages that once stood out for their poor language can now be flawless in any language and optimized for conversion. Phishing emails will be highly convincing, enticing victims to hand over their login credentials or other sensitive information.

2. Impersonating Others in Online Conversations: Attackers can use a language model to create text that appears to have been generated by a trusted person or entity, such as a bank representative or government official.

3. Increase in CEO Fraud: With clever use of the tool, impersonating executives to deceive employees of an organization becomes child’s play.

4. Formal and Grammatically Correct Texts: People with minimal proficiency in a language will be able to produce formal, grammatically correct texts in many languages, useful for sending traditional mail within certain countries. For example, a letter supposedly from your bank asking you to cut up and send in your bank card, promising a new one in return.

5. Generating Harmful Code: Creating potentially harmful code that can be used to penetrate organizations’ networks.

Linking AI to Cybersecurity Resilience

AI, particularly through tools like ChatGPT, has the potential to revolutionize cybersecurity. However, it also poses significant risks if misused. Therefore, organizations must adopt a dual approach: leveraging AI for enhanced security measures while also implementing robust defenses against AI-driven threats.

AI for Enhanced Security: AI can analyze vast amounts of data quickly, identify patterns, and detect anomalies that could signify a cyber attack. Automated systems can respond to threats faster than humans, reducing the risk of breaches.
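To make the anomaly-detection idea concrete, here is a minimal sketch in Python. It uses a simple z-score heuristic rather than a full machine-learning model; the hourly login counts and the 2.5-sigma threshold are illustrative assumptions, not values from the Europol report.

```python
import statistics

def flag_anomalies(samples, threshold=2.5):
    """Return values deviating more than `threshold` standard
    deviations from the mean (a simple z-score heuristic)."""
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    return [x for x in samples if abs(x - mean) / stdev > threshold]

# Hypothetical hourly login counts; the spike at 480 could signal
# a credential-stuffing attempt worth investigating.
logins_per_hour = [42, 38, 45, 40, 39, 44, 41, 480, 43, 37]
print(flag_anomalies(logins_per_hour))  # → [480]
```

Real AI-driven security tooling applies far richer models to far more signals, but the principle is the same: learn what “normal” looks like, then surface deviations faster than a human analyst could.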

Defense Against AI-Driven Threats: Organizations need to stay ahead of cybercriminals who use AI to their advantage. This involves continuous monitoring, updating security protocols, and educating employees about the latest threats and prevention techniques.

Conclusion

The integration of AI into cybersecurity presents both opportunities and challenges. While tools like ChatGPT can enhance security measures, they also create new vulnerabilities that cybercriminals are eager to exploit. By understanding these risks and implementing comprehensive security strategies, organizations can leverage AI’s benefits while protecting against its potential threats. The recent findings from Europol underscore the importance of vigilance and proactive measures in this evolving landscape. And, importantly, ensuring that every employee shares a unified vision on cybersecurity is crucial for building a resilient and secure organization.

Interested in this Europol paper?

If you are interested in the whitepaper, please let me know.

What Cybersecurity.vision offers

For more information on how we can help your organization, visit our services offerings at Cybersecurity.vision.

Copyright by Cybersecurity.vision, The Netherlands. All rights reserved.
