- Scammers, hackers and spammers are riding the wave of artificial intelligence technologies such as OpenAI’s ChatGPT to build sophisticated scamming mechanisms.
- The FBI and Facebook parent Meta are among the organizations that have issued security alerts.
- The public, and investors in particular, need to be educated to stay vigilant, because what looks genuine may be fake.
The use of artificial intelligence (AI) has sparked debate in recent times. Technology designed to expand freedom, improve quality of life and make work easier is now being exploited by fraudsters to steal from investors.
The use of AI in scams has become so sophisticated and prevalent that it has caught the eye of the U.S. Federal Bureau of Investigation (FBI). On June 5, the agency issued an alert stating that malicious actors were creating synthetic content (commonly referred to as “deepfakes”) by manipulating videos and photographs to show innocent people engaged in sexually explicit activities.
According to Guy Rosen, Chief Information Security Officer at Meta, malware operators and spammers are very attuned to what’s trendy at any given moment. Right now, generative AI is what has captured the public’s imagination, and current malware campaigns are following that trend.
AI – Scammers’ Playground
Social media platforms have become scammers’ playgrounds. Fraudsters use AI-powered tools to create fake accounts with large followings, manufacturing synthetic credibility to defraud unsuspecting investors and vulnerable people.
For example, scammers can deploy AI-driven chatbots or virtual assistants to lure investors, then defraud them through fake tokens, bogus initial coin offerings (ICOs) or seemingly high-yield investment opportunities.
Social media platforms like Instagram also give scammers a ready supply of authentic photos, which fraudsters turn into deepfakes to extort victims. The problem is widespread: the FBI received some 7,000 reports of online extortion last year.
Facebook’s parent Meta warned about an uptick in malware disguised as ChatGPT-related software. In a note posted on its website, Meta said:
“Since March alone, our security analysts have found around 10 malware families posing as ChatGPT and similar tools to compromise accounts across the internet.”
The goal of ChatGPT-themed malware is to run unauthorized ads from compromised business accounts across the internet. One identified strain, known as NodeStealer, can steal passwords by harvesting cookies and login information saved in the browser.
Because this malware can be hosted on legitimate services such as Dropbox, Google Drive, MediaFire, Discord, Trello, Microsoft OneDrive and iCloud, tracking down the culprits is even more difficult.
Cryptosphere
Social media and the cryptosphere are closely linked; what happens on one quickly finds its way into the other. For example, scammers can use deepfakes to create artificial faces, voices and images of well-respected personalities appearing to endorse a scam crypto project.
By the time investors realize the images are fake, the scammers are long gone with their hard-earned money. Deepfakes can also produce highly realistic audio and imagery to accompany online content or sales materials.
Scammers can now automate elaborate pump-and-dump schemes, artificially inflating a token’s value and using fake accounts with large followings to lure unsuspecting investors, only to dump their own holdings at a profit and leave those investors with heavy losses.
The use of AI-linked keywords is also on the rise in the crypto space. A search for “ChatGPT” and “OpenAI” on DEXTools, an interactive crypto trading platform, turns up hundreds of trading pairs built around the AI-themed keywords.
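Curious readers can reproduce that observation themselves. The sketch below is a minimal example, assuming a public pair-search endpoint modeled on Dexscreener’s API (DEXTools’ own API requires a key); the URL and JSON field names here are assumptions, not a documented contract.

```python
# Minimal sketch: count DEX trading pairs whose token names use AI-themed keywords.
# The endpoint and response shape are assumptions modeled on Dexscreener's public
# search API; adapt them to whatever data provider you actually use.
import requests

SEARCH_URL = "https://api.dexscreener.com/latest/dex/search"  # assumed endpoint
KEYWORDS = ["ChatGPT", "OpenAI"]

for keyword in KEYWORDS:
    resp = requests.get(SEARCH_URL, params={"q": keyword}, timeout=10)
    resp.raise_for_status()
    pairs = resp.json().get("pairs") or []  # assumed response field
    # Keep only pairs whose base-token name actually contains the keyword.
    hits = [
        p for p in pairs
        if keyword.lower() in p.get("baseToken", {}).get("name", "").lower()
    ]
    print(f"{keyword}: {len(hits)} matching pairs (of {len(pairs)} results)")
```

A long list of hits proves nothing about legitimacy: most tokens trading on a trendy name have no connection to the company whose brand they borrow.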
Investors need to be more vigilant to avoid falling into these traps. For example, users should verify that they are on a project’s official website before taking any action or making an investment.
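As a small illustration of that advice, the sketch below checks whether a shared link actually points to a domain on a user-maintained allowlist of official sites. The domains listed are hypothetical examples; build the allowlist yourself from sources you trust, never from the message that contains the link.

```python
# Minimal sketch: verify a link points to an official domain before visiting it.
# The allowlist below is a hypothetical example.
from urllib.parse import urlparse

OFFICIAL_DOMAINS = {"openai.com", "ethereum.org"}  # hypothetical allowlist

def is_official(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    # Accept the exact domain or one of its subdomains, nothing else.
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)

print(is_official("https://chat.openai.com/"))          # True: real subdomain
print(is_official("https://openai.com.evil.example/"))  # False: look-alike host
```

A check like this catches look-alike hosts such as openai.com.evil.example, which visually resemble the real domain but resolve somewhere else entirely.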