Throughout history, there have always been people who created things to advance society, and others who never missed an opportunity to take advantage of that. The latest technology to capture people's imagination and excitement, generative AI, is no exception. Just as with cryptocurrency, artificial intelligence has attracted the attention of bad actors, according to a research report from Meta's security team.
The report reveals that malware operators, scammers, and spammers have increasingly been exploiting artificial intelligence tools, especially the best-known one, ChatGPT, to distribute malicious links, harmful files, and scam schemes. Meta stated that in March alone it found numerous pieces of malware posing as ChatGPT and similar tools, some of them hidden in browser extensions and plugins. Malefactors have moved to AI because it is the latest wave of technology in mass use (ChatGPT alone has more than 100 million users), which increases the odds of a successful scam.
ChatGPT-related Malware on the Rise, Meta Says https://t.co/eEJRVb9qkI
—Slashdot (@slashdot) May 3, 2023
Nathaniel Gleicher, Meta’s head of security policy, and Ryan Victory, malware discovery and detection engineer, stated, "As part of our most recent work to protect people and businesses from malicious targeting using ChatGPT as a lure, since March 2023 we've blocked and shared with our industry peers more than 1,000 malicious links from being shared across our technologies and reported a number of browser extensions and mobile apps to our peer companies".
The report highlights the need for caution around these malicious actors in the generative AI space. Even without outright fraud schemes, AI itself can be deceptive and used to promote certain ideas dishonestly. Combined with malicious intent, it becomes a real threat that can spread malware, damage private and corporate IT systems, cause data leaks, and more.
Even though Meta has surfaced an important issue, it doesn't help that the company itself is investing heavily in new generative AI, which could become yet another platform for malware distribution.
Summing up, despite the great capabilities that ChatGPT grants its users, there are also dangers we should all be aware of so as not to fall victim to scammers. Undoubtedly, Meta and other large companies developing AI should take safety more seriously and build stronger protections.
Many researchers see other, more global threats in the rapid advancement of artificial intelligence, fearing that AI will endanger our society.
This whole malware situation is yet more proof that we should put stronger safeguards and restrictions on AI models, just as Nvidia did, to ensure more control over the technology.