
The AI Gold Rush: ChatGPT and OpenAI targeted in AI-themed investment scams

Investment scams and AI – a match made in heaven?  

Online investment scams are a big money spinner for criminals, accounting for $4.6 billion of reported losses in the US. With the explosion of interest in artificial intelligence (AI) following the release of OpenAI’s ChatGPT in late 2022, it was perhaps inevitable that criminals would jump on the bandwagon to promote a new generation of bogus investment products that claim to “[harness] the power of AI.”

Netcraft has uncovered a range of malicious sites using ChatGPT and OpenAI-themed content to attract would-be investors looking to take advantage of the rise of generative AI. Many tout the use of “advanced trading technology,” promise outlandish returns, and feature bogus success stories. Once lured in, victims are tricked into making payments that never yield the promised returns.

In this blog, we’ll walk through some of the examples we’ve found. 

“ChatGPT platform” with fake Sam Altman and Elon Musk videos 

One such investment scam campaign blatantly impersonates ChatGPT, claiming to be powered by the popular generative AI platform, which supposedly allows it to “imitate the thinking of analysts.” Seeking to establish credibility, the scam claims more than 1 million registered users and $68 million invested each month. These claims are particularly implausible given that the domain name had been registered only eight days prior.

Figure 1 Fake investment platform masquerading as ChatGPT – hxxps://lifecovewe[.]world. 
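Domain age is one of the simplest signals defenders can use against campaigns like this: a site claiming a million users and tens of millions in monthly investment, yet registered just days earlier, is an immediate red flag. As an illustrative sketch (the threshold and helper names here are hypothetical, not a Netcraft product detail), given a domain's WHOIS creation date one can flag recently registered domains:

```python
from datetime import datetime, timezone

# Illustrative cutoff; real detection systems tune this and combine it
# with many other signals (hosting, content, certificate age, etc.).
RECENT_THRESHOLD_DAYS = 30

def domain_age_days(created: datetime, observed: datetime) -> int:
    """Age of a domain in whole days, given its WHOIS creation date."""
    return (observed - created).days

def is_suspiciously_new(created: datetime, observed: datetime,
                        threshold: int = RECENT_THRESHOLD_DAYS) -> bool:
    """Flag domains registered within `threshold` days of observation."""
    return domain_age_days(created, observed) < threshold

# Example mirroring the scam above: a site observed 8 days after registration.
created = datetime(2024, 1, 1, tzinfo=timezone.utc)
observed = datetime(2024, 1, 9, tzinfo=timezone.utc)
print(domain_age_days(created, observed))      # 8
print(is_suspiciously_new(created, observed))  # True
```

In practice the creation date would come from a WHOIS or RDAP lookup; the check itself is kept as a pure function so it is easy to test and tune.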

The site also includes a poorly crafted video that attempts to fool visitors into thinking it is a genuine endorsement from Sam Altman, the CEO of OpenAI. It espouses the increasing power of machine learning, with the tool supposedly able to “analyze the market situation and correlate data in real-time.” With the rapid progress being made in deepfakes, it is only a matter of time before videos like this, created by criminals, become significantly more convincing.

Figure 2 Video featuring doctored audio purporting to be the words of ChatGPT’s Sam Altman.

Other sites in the same campaign feature promotional videos of Elon Musk, who served on OpenAI’s initial board of directors. Musk is frequently impersonated in many types of scams, including the LinusTechTips hack that Netcraft reported last year.

Fake ‘Interactive’ ChatGPT trading bot 

ChatGPT gained popularity by allowing back-and-forth text-based conversation with a chatbot based on OpenAI’s GPT large language models. Leveraging this popularity, a Slovakian investment scam claims users can “earn up to 10,000 euros per month on the unique platform ChatGPT bot.” To qualify, victims simply answer a series of multiple-choice questions presented within an interface designed to mimic a ChatGPT conversation.

Figure 3 ChatGPT-themed investment scam – hxxps://on2[.]luck-page[.]quest/. Translated from Slovak. 

Of course, there is no genuine interaction between the fake bot and the user. Regardless of the victim’s responses, the site eventually displays a contact details form to collect personal data and continue the scam.

Figure 4 The final form to collect the victim’s details. 

Deceptive OpenAI initial public offering (IPO) scam  

Another common scam type is the site that offers exclusive access to shares, and unmatched returns, in advance of a company’s IPO. These investment scams are used so frequently by criminals that both the SEC in the US and ASIC in Australia have issued consumer alerts about IPO scams. As would be expected, OpenAI, one of the world’s most valuable private companies, has been targeted in these pre-IPO themed attacks.

One such scam specifically targets French-speaking users, claiming that OpenAI is holding an IPO and promising growth forecasts of over 150%. Once again, the site features a single form to capture the victim’s name, email address, and phone number.

Figure 5 Fake site claiming exclusive access to ChatGPT IPO.

Fake endorsements from politicians and journalists 

Another GPT-themed campaign, spread via email, impersonates a news article from the trusted French newspaper Le Monde.

The scam website, a convincing imitation of the established news outlet, features a completely fictitious interview between French politician Jean-Luc Mélenchon and political journalist Gilles Bouleau. It claims that Mélenchon has uncovered the secrets of an AI-powered, GPT-based trading bot, “TradeGPT 500 Force,” that can help French citizens unlock incredible returns, with an investment of 250 euros supposedly turning into a million “in 12 to 15 weeks.”

Figure 6 Le Monde impersonation investment scam. Translated from French.

Similar tactics were reviewed in the Netcraft blog post on health product scams, in which we identified Fox News, the Daily Mail, the Today Show, and the New York Times as commonly impersonated news sites. In the ‘Le Monde’ scam, the page includes fake comments, each with a fabricated identity and a phony success story. The links in the phishing email containing the fake article take the victim to a site hosting the typical contact details form.

Figure 7 The destination site from the fake Le Monde article. Translated from French. 

ChatGPT-themed social media advertising  

One mechanism criminals employ to identify victims is to run advertising campaigns on social media platforms, with ads linking to investment scam lure sites like those discussed above. Netcraft has identified fraudsters launching various advertising campaigns for what they claim are ChatGPT-powered trading robots.

Figure 8 Campaigns on Facebook and Instagram containing links to investment scams.

Explore the Netcraft difference 

Netcraft provides cybercrime detection, disruption, and takedown services to organizations worldwide, including 12 of the top 50 global banks and the biggest cryptocurrency exchange ranked by volume.  

Netcraft’s brand protection platform operates 24/7 to discover phishing, fraud, scams, and other cyber attacks through best-in-class automation, AI, machine learning, and human insight. Our disruption & takedown service ensures that malicious content is blocked and removed quickly and efficiently—typically within hours.  

Curious about how we can protect your organization from the threat of AI-themed and other phishing attacks? Let us show you: book a demo today, or visit our website to find out more.