It was bound to happen. The launch of ChatGPT was also a signal that generative AI tools for hacking and cybercrime would follow.
WormGPT is one of them: a generative AI tool sold on the dark net. It is being used to break into sites, steal information, and do plenty of other disruptive and damaging things online. Large language models (LLMs) – the AI systems behind tools like ChatGPT – are trained on vast amounts of text to produce fluent, human-like language. Built on the same kind of model as ChatGPT, but without its guardrails, WormGPT will help a hacker penetrate networks or secure portals and cause disruption.
Attackers also use ‘jailbreaks’ – carefully engineered prompts and inputs designed to trick a model into disclosing sensitive information, producing inappropriate content, or generating harmful code.
The possibilities for using a large language model to cause harm are endless. It can be made to generate disinformation and misinformation, shift public opinion, and even sway political campaigns – and the risks to unsuspecting users only grow when bootleg AI models are added to the mix.
Essentially, it boils down to scale: a language model can generate convincing text far faster than any human, and it can run many scams at once. That makes cyberattacks such as phishing campaigns particularly easy to replicate, even in the hands of a novice cybercriminal.
Attackers with limited skills can use this technology, which opens it up to a much broader spectrum of cybercriminals.
If you receive a communication from a hacker seeking information, it may be so well written that you find it difficult to believe it is an attack. The grammar and sentence construction will be of a very high standard, and genuine-looking validations will be supplied to convince you the message is legitimate.
Accessing WormGPT is risky, to say the least. It resides in the dark corners of the web and is not easily reached; generally, only people with malicious intent make use of it, and a subscription fee is charged. Anyone who downloads WormGPT can expect to be monitored and to have their intentions scrutinized, since the use of such a tool is punishable by law.
Cybercriminals are also developing new techniques for what is called Business Email Compromise (BEC): impersonating high-level executives in an organization to deceive employees with malicious emails. These messages are often designed to slip past a company's security systems and inflict financial harm or outright theft – cracking passwords, stealing data, or disrupting a working system.
WormGPT is trained on diverse data to make it more versatile, and cybercrime experts believe it is already in wide use. The tool will help criminals increase the frequency of their attacks and make them more sophisticated, so they will be that much harder to defend against and to trace.
As of today, there is no specific protection against WormGPT if it is used against you. Your best bet is to safeguard your system by using antivirus software and staying alert. In a sophisticated cyberattack using tools like WormGPT, it is the human operator who may become the weak link.
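Staying alert can be partly automated. As one illustration – a minimal sketch only, assuming mail that carries standard RFC 8601 Authentication-Results headers, with an invented sample message – the following Python snippet flags emails whose SPF, DKIM, or DMARC checks failed, a common tell of the spoofed-executive messages used in BEC:

# Minimal illustrative sketch: flag messages whose email-authentication
# checks failed. Header layout follows RFC 8601; real mail servers vary,
# and the sample message below is invented for this example.
from email import message_from_string

def looks_suspicious(raw_email: str) -> bool:
    msg = message_from_string(raw_email)
    # Authentication-Results headers are added by the receiving mail server.
    results = msg.get_all("Authentication-Results") or []
    for header in results:
        lowered = header.lower()
        if any(flag in lowered for flag in ("spf=fail", "dkim=fail", "dmarc=fail")):
            return True
    # No authentication results at all also deserves a closer look.
    return not results

raw = """From: ceo@example.com
Authentication-Results: mx.example.net; spf=fail smtp.mailfrom=example.com
Subject: Urgent wire transfer

Please process this payment today.
"""
print(looks_suspicious(raw))  # True: SPF failed for the claimed sender

A failed or missing check does not prove a message is malicious, but it is exactly the kind of red flag that a well-written, AI-polished email cannot erase.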
Meanwhile, companies like OpenAI and Google are taking steps to combat the abuse of large language models.
As AI becomes more advanced, expect more such harmful applications to surface.