Discussions about cybercriminals using the powers of OpenAI to generate malware have been the talk of the town ever since ChatGPT was launched. As predicted by experts, Check Point researchers have found cyber attackers trying to bypass the ChatGPT safeguards. Despite the anti-abuse restrictions put in place by the AI chatbot's creators, hackers have been quick to find their way around them.
The researchers at Check Point Software Technologies explored the dark web for traces of malicious activity. They found several instances of cybercriminals using ChatGPT to improve their malware code, and then followed up by testing ChatGPT's ability to refine malicious code themselves.
The researchers concluded that although ChatGPT cannot handle complex coding on its own, it can most definitely make the coding process easier for hackers and even help them improve their code when needed.
What Are the Upsides of OpenAI, and When Can It Get Dangerous?
Even though artificial intelligence has existed in the cyber landscape for decades, its perception has transformed rapidly in the modern digital era. Today, AI enthusiasts are going gaga over algorithm-based machine learning. OpenAI was founded in 2015 by tech moguls including Sam Altman and Elon Musk, with the basic idea of building an artificial intelligence system that possesses human-like abilities to reason and can apply those skills to unfamiliar situations.
According to OpenAI co-founder Elon Musk, the platform aims to “develop artificial intelligence and apply it to [OpenAI’s] significant goals, such as bringing benefits to all humanity and not being bound by the requirements to generate financial profit.”
Pop culture is filled to the brim with depictions of the powers of artificial intelligence, though the typical focus is apocalyptic AI doom rather than its beneficial abilities. While there are plenty of reasons to feel uneasy about AI as we step into the future, human-like artificial intelligence can also have staggering benefits for the human race. Let’s take a look at some of the sectors that can benefit immensely from the OpenAI platform:
Healthcare

Even though this facet of AI is yet to be fully developed and explored, the potential for the OpenAI platform to help accurately diagnose diseases and discover new drugs hasn’t gone unnoticed. Experts are also exploring the potential of AI in patient monitoring and the analysis of hospital data.
Education

Artificial intelligence truly holds the potential to take global education to new heights. As AI penetrates the education sector, it can become easier to simplify administrative tasks and provide a more personalized learning experience for both educators and students.
Transportation

Autonomous cars like Tesla’s have been fan favorites for quite some time now, and self-driving has become a buzzword in the automobile industry. With a reduced risk of collisions on the road and better driving efficiency, it’s safe to say that AI-powered self-driving cars can truly change the face of transportation.
Customer Service

One of the most common examples of artificial intelligence in our everyday lives is the smart home assistant. From Google Assistant to Alexa, tech moguls are working day and night to integrate artificial intelligence into customer service. Not only do such integrations provide adequate solutions to customers without any human intervention, but they also transform the way customers interact with a company.
Even though artificial intelligence is a breakthrough catalyst that will enable countless innovations in the future, its power needs to be handled with immense sensitivity and caution. Experts believe that artificial intelligence carries extreme risks that could endanger our existence. Some of the potential dangers of the OpenAI platform include:
Surveillance and Loss of Privacy

Thanks to the immense technological advances of the modern AI era, a person’s actions can be tracked through their online activity. Identity-based algorithms are employed around us every day, making it ever easier for AI systems to identify who you are.
Autonomous Weapons

If artificial intelligence can be programmed to inflict harm on humanity, that is a sure-shot recipe for disaster. Experts have warned that artificial intelligence can be used to design self-firing weapons, introducing the threat of autonomous weapons worldwide.
Algorithmic Bias

As mentioned earlier, artificial intelligence can extract a ton of information about you from the vast sources of data available in today’s world. What’s concerning is that AI can also use that same information against you. And while machines running on artificial intelligence algorithms are designed to learn on their own, the humans behind such machinery carry their own ideologies and biases into what they build.
How Can Hackers Abuse ChatGPT?
Attempts to expose the flaws in the OpenAI-based ChatGPT model began in December 2022, when certain Reddit users started discussing ways to override ChatGPT’s defenses. The researchers at Check Point have pointed out three ways in which cybercriminals use ChatGPT to generate malware.
On December 21, 2022, a cybercriminal using the handle USDoD announced that he had generated a Python script with ChatGPT that performs multi-layer encryption and could serve as the basis for ransomware. What’s concerning here is that the platform allows even those with zero technical skills to generate malware that could significantly harm society. While ChatGPT isn’t independently capable of generating full-fledged ransomware, anyone can request an encryption script like this and then turn it to malicious ends.
In another report, Check Point researchers described a hacker who used ChatGPT to create a Python-based information stealer that could exfiltrate important files and documents. The same hacker also shared a Java snippet that uses PowerShell to covertly download the PuTTY client, a technique that could be adapted to download and run virtually any program.
Experts have also found ChatGPT being used to build dark web marketplace tooling, and they note that the restrictions in its user interface are relatively weak. Cybercriminals can single-handedly circumvent these barriers simply by rephrasing their prompts.
How Far Can Cyber Criminals Use ChatGPT?
The dark side of ChatGPT is that it allows even unskilled cybercriminals to mount attacks, for example by creating highly convincing phishing emails in their chosen target language. Let’s take a look at the different ways in which ChatGPT can be abused:
Teach Cyber Criminals to Exploit Your Vulnerabilities
ChatGPT can teach cybercriminals how to take advantage of a website’s TLS 1.1 vulnerabilities. The platform can provide a list of real-world exploits and valuable information about the tools needed to use them. This opens the door for unskilled hackers to expand their learning and become skilled ones, making hacking an attainable goal for individuals who would otherwise have no way of learning the techniques needed to attack someone’s online property.
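The same protocol knowledge cuts both ways, though: administrators can check their own servers for the deprecated protocol before attackers do. Below is a minimal sketch, using only Python’s standard library, of probing whether a server you own still accepts TLS 1.1 (the hostname is a placeholder; run it only against your own infrastructure):

```python
import socket
import ssl

def accepts_tls11(host, port=443, timeout=5.0):
    """Return True if the server completes a handshake with a client
    capped at TLS 1.1 -- a sign the deprecated protocol is still enabled."""
    try:
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE  # audit probe only; skip cert validation
        # Pin the client to TLS 1.1 and nothing else
        ctx.minimum_version = ssl.TLSVersion.TLSv1_1
        ctx.maximum_version = ssl.TLSVersion.TLSv1_1
        with socket.create_connection((host, port), timeout=timeout) as raw:
            with ctx.wrap_socket(raw, server_hostname=host) as tls:
                return tls.version() == "TLSv1.1"
    except (ValueError, ssl.SSLError, OSError):
        # Handshake refused, protocol unavailable, or host unreachable
        return False
```

If `accepts_tls11("example.com")` returns True, the server still negotiates TLS 1.1 and should be reconfigured to require TLS 1.2 or newer.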
Write Brand-New Malware
Given the right input request, ChatGPT can go ahead and write an entire ransomware code for you. This could result in a higher number of cyber attacks and an unprecedented rise in their potential damage.
Generate Convincing Phishing Emails
Some of the telltale signs of a phishing email are grammatical errors, punctuation mistakes, and spelling mistakes. But what if all of those signs were corrected? ChatGPT can not only generate grammatically flawless phishing emails but also refine them further using inputs from a target’s social media. Additionally, cyber attackers can generate phishing emails in any language ChatGPT supports.
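This is exactly why defenders can no longer rely on spotting sloppy writing. As a crude illustration, the hypothetical heuristic below (a plain-Python sketch, not a production filter) flags structural signals such as urgency phrasing and raw-IP links, which survive even when an email’s grammar is flawless:

```python
import re

# Phrases that pressure the reader to act without thinking -- a classic
# phishing tactic that survives even grammatically perfect text.
URGENCY_PHRASES = ("act now", "verify your account", "suspended",
                   "immediately", "final warning")

def phishing_signals(email_text):
    """Return a list of heuristic red flags found in an email body."""
    flags = []
    low = email_text.lower()
    for phrase in URGENCY_PHRASES:
        if phrase in low:
            flags.append("urgency phrase: %r" % phrase)
    # Links that point at a bare IP address instead of a named domain
    if re.search(r"https?://\d{1,3}(?:\.\d{1,3}){3}", low):
        flags.append("URL uses a raw IP address instead of a domain")
    return flags
```

Running this on "Your account is suspended. Act now: http://192.0.2.7/login" flags both the urgency wording and the raw-IP link, while an innocuous note returns an empty list.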
Practice Social Engineering Attacks
ChatGPT also enables cyber attackers to conduct more sophisticated attacks by serving as a research and rehearsal tool. For instance, a hacker can role-play a help-desk password-reset interaction with ChatGPT and prepare for the expected line of questions. Hackers can rehearse such conversations at length, learn to extract the information they want, anticipate objections, and refine their pretexts accordingly.
Can an Everyday Person Use ChatGPT to Carry Out Cyber Attacks?
ChatGPT normally has security settings that prevent it from producing malicious code or emails. However, if a prompt is phrased so as to sidestep these guardrails, ChatGPT can still be coaxed into generating phishing emails and malicious code.
Sergey Shykevich, threat intelligence group manager at Check Point, reiterated that you don’t need any coding experience to write malware using ChatGPT. All you need is some knowledge of what you want the malware to do, and ChatGPT will do the rest by writing the code for you. In today’s tech-savvy world, the real concern is that ChatGPT allows even the lowest-skilled cyber attackers to create malware.
Mr. Shykevich added that, in the long run, more sophisticated cybercriminals will also pick up on the trend and adopt the OpenAI platform to address gaps in their own code.
If you look at things from the perspective of an attacker, then ChatGPT is the perfect tool to create coding templates for one’s objectives and bridge any skill gaps that one might have.
What Is OpenAI Doing About It?
Recognizing the need for more security barriers, OpenAI’s makers are constantly working to improve their technology and limit its misuse in the long run. At this moment, however, ChatGPT remains a glitchy, manipulation-prone creation that even the most unskilled hackers can exploit.
Another thing to note here is that these ChatGPT-enabled cyber attacks aren’t particularly new; if anything, they are spin-offs of old attacks that many competent hackers have long trained themselves to pull off. The concern at hand is that ChatGPT has opened the doors for thousands more potential hackers to flourish, which means we all need to be prepared for more frequent and more sophisticated cyber attacks.
In the long run, to make it harder for cybercriminals to misuse AI technologies, developers will need to pay close attention to improving their AI engines and training their creations to identify malicious prompts. According to Mr. Shykevich, another way such misuse could be curbed in the future is to require authorization to use the OpenAI platform.
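As a toy illustration of the prompt-screening idea, a first-pass filter might match incoming requests against a deny-list before they ever reach the model. To be clear, the patterns below are purely hypothetical, and real systems use trained classifiers rather than keyword lists:

```python
import re

# Purely hypothetical deny-list for illustration; production systems rely
# on trained classifiers, since keyword matching is trivial to evade.
BLOCKED_PATTERNS = (
    r"\bransomware\b",
    r"\bkeylogger\b",
    r"write (?:me )?(?:a |an )?(?:exploit|malware|virus)",
    r"bypass (?:\w+ ){0,2}(?:filter|safeguard|restriction)s?",
)

def is_malicious_prompt(prompt):
    """Crude first-pass screen: True if the prompt matches a blocked pattern."""
    low = prompt.lower()
    return any(re.search(pattern, low) for pattern in BLOCKED_PATTERNS)
```

Such a filter would reject “write me a ransomware script” while passing a benign question about cryptography, and it also shows why keyword screening alone is weak: the rephrased prompts described earlier slip straight past it.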
You can also take additional measures to better prepare your organization for future cyber attacks. These include preparing governance policies, conducting cyber awareness campaigns in the company, testing that awareness and training through phishing trials, and giving your employees, friends, and family members a basic education about cyber attacks. Trust us: you’ll be glad you took these measures.