Meta’s security team fights against ChatGPT scammers

  • ChatGPT and generative AI have become the newest lures for malware. 
  • Meta has blocked more than 1,000 malicious links that used ChatGPT as bait. 

Throughout history, there have always been people who created things to advance society, and others who never missed an opportunity to exploit those creations. The latest technology to seize "people's imagination and excitement", generative AI, is no exception. Just as with cryptocurrency, artificial intelligence has attracted the attention of "bad actors", according to a research report from Meta's security team.

The report reveals that malware operators, scammers, and spammers have increasingly been exploiting interest in artificial intelligence tools, especially ChatGPT, the most famous of them, to distribute malicious links, harmful files, and scam schemes. Meta stated that in March alone it found numerous malware strains posing as ChatGPT and similar tools, some of them hidden in browser extensions and plugins. Bad actors have moved to AI because it is the latest wave of technology used by a huge audience (ChatGPT alone has more than 100 million users), which increases the odds of a successful scam.

Nathaniel Gleicher, Meta’s head of security policy, and Ryan Victory, malware discovery and detection engineer, stated, "As part of our most recent work to protect people and businesses from malicious targeting using ChatGPT as a lure, since March 2023 we've blocked and shared with our industry peers more than 1,000 malicious links from being shared across our technologies and reported a number of browser extensions and mobile apps to our peer companies".

The report highlights the need for vigilance against malicious actors in the generative AI space. Even without deliberate fraud, AI output can be misleading and can be used to promote ideas in dishonest ways; combined with malicious intent, it becomes a real threat that can spread malware, compromise private and corporate IT systems, and cause data leaks.

Although Meta has drawn attention to an important issue, the situation is complicated by the fact that the company itself is investing heavily in generative AI, which could become yet another platform for malware distribution.

Summing up, despite the great capabilities that ChatGPT grants its users, there are also dangers we should all be aware of so as not to fall victim to scammers. Meta and other large companies developing AI should undoubtedly take safety more seriously and build stronger protections.

Many researchers also see broader threats in the rapid advancement of artificial intelligence, fearing that AI systems could endanger society at large.

The current malware wave is further proof that stronger safeguards and controls should be placed on AI models, much as Nvidia has done to ensure more control over the technology.

Nataliia Huivan
Professional author in IT Industry

Author of articles and news for Atlasiko Inc. I do my best to create high-quality, useful content that helps our website visitors understand more about software development and modern IT trends and practices. Constant innovation in the IT field and communication with top specialists inspire me to seek knowledge and share it with others.
