AI Scams

AI is a continually evolving technology, one that gains more traction, and becomes more complex, with each passing day. Unfortunately, this has made it attractive to scammers looking to use AI to manipulate and defraud victims. AI scams are on the rise and, as the technology improves, they are set to become both more common and more sophisticated. 

At Wealth Recovery Solicitors, we’re helping victims of numerous AI scams designed to trick them out of money and sensitive personal information. Scammers use many different tactics and strategies in AI scams, which is why it is important to familiarise yourself with what AI scams are and how you can keep yourself safe. 

Contact us to arrange a free consultation

What Is An AI Scam?

An AI scam, or artificial intelligence scam, uses AI technology as the basis of the scam. Forms of AI scam include AI-generated video and audio, AI images and AI-written text. Scammers typically use generative AI tools, such as image and voice generation software or chatbots like ChatGPT and Gemini, to create these scams. 

AI software is often free to use, which makes it easily accessible to anyone with an internet connection. As AI picks up legitimate momentum across different sectors, it becomes more mainstream and more widely available, making it easier for scammers to misuse the technology. This has led to an increase in the number of scams using AI. 

Artificial intelligence makes scams more convincing and harder for victims to spot. We’re working with victims of AI scams to help recover money they’ve lost, whether through cryptocurrency scams, online banking fraud or other forms of online fraud. 

What Types Of AI Scams Are There?

Deepfakes

A deepfake scam uses altered videos or images to make it seem as though someone – likely a trusted person or celebrity – is doing or saying something that, in reality, they never did. Scammers use large databases of images, video and audio clips to replicate the voice and actions of the person in the video. They then use specialist software to make the ‘person’ say or do something that encourages victims to do as asked, before posting the result online. 

This type of AI scam is already in circulation, with a strong focus on financial fraud schemes, and a number of celebrities have been used to trick victims into parting with money and personal information. One such celebrity is Martin Lewis – a trusted financial expert – who has appeared in deepfake AI scams that encourage victims to invest in new schemes or sign up for services. Deepfakes can also be used to steal identities or to pass online verification checks, such as those for online banking, allowing the scammer to access the victim’s accounts. 

How To Spot Deepfake AI Scams

  • Deepfake videos rely on lip-syncing, so watch for audio and lip movements that are slightly out of sync.
  • Look closely at the video’s detail – the ‘person’s’ hairstyle, the lighting and any visible blurring may look convincing at first glance but appear fake on closer inspection.
  • Although AI is highly advanced, it cannot yet fully mimic real-life human movement. Look out for unnatural expressions or movements, such as reduced blinking or jerky motion.


Voice Cloning Scams

A voice cloning scam involves recording a person’s voice and using it to generate AI phone calls of that person speaking. Scammers use such recordings to make it sound as though the person has said whatever they want them to say. 

Voice cloning scams can be very convincing, replicating both the tone and language of an individual. Some distressing examples involve scammers calling parents and using AI-generated audio to make it sound as though their children are in distress, or need money sent urgently because they have lost their phone or bank card. 

According to leading security provider McAfee, scammers need just three seconds of audio to generate a clone for a voice cloning scam. With that short snippet, they can create messages that are urgent and distressing in nature, but the goal is ultimately the same – to get the victim to send money. 

How To Spot Voice Cloning Scams

  • AI phone calls are unlikely to involve much real conversation – the ‘person’ on the other end will probably use only short phrases, such as “help me” or “I need help”. Ask the caller for as much information as possible, as only a real person will be able to reply. If you suspect a voice cloning scam, you can also try laughing, as AI will struggle to respond naturally. 
  • Listen for any unusual background noises or unexpected tones in the ‘caller’s’ voice, as these could indicate you’re not having a real conversation. 
  • Requests for payment in cryptocurrency are typical of voice cloning scams, so treat them as a red flag. 


Phishing Messages

Phishing scams are not new – scammers have long sent communications such as emails and text messages designed to appear to come from a trusted source. However, AI has completely changed how scammers produce these communications. With AI tools such as ChatGPT, scammers can now impersonate the tone and style of a trusted source’s messages for free and within just a few minutes. 

How To Spot Phishing AI Scams

  • Communications from genuine sources will address you by name and will likely include details only they would know, such as your phone number or part of your account number or other identifying information. 
  • Treat any message asking for personal or financial information with caution – this is how phishing scammers harvest that information from you.
  • Almost all phishing emails contain a link. Even though legitimate, trusted companies also send messages containing links, you should always hover over a link to check whether it leads to an official website before clicking. 

Protecting Yourself From AI Scams

AI scams can be incredibly convincing, but there are still steps you can take to protect yourself from falling victim to them. 

Be Cautious Online

AI scams, such as deepfakes, can look, sound and feel real, but it’s important to keep your guard up when it comes to communications from unfamiliar emails, numbers or social media accounts. 

Don’t Take Action If You Feel Pressured

Scammers will try to pressure you into acting quickly so that you don’t have time to question what’s happening. They may even tell you that if you hesitate, you will face a fine or miss out on a great opportunity. If a communication begins to feel urgent, or you are being rushed into making a decision, avoid sending any money or parting with personal information. Legitimate companies and businesses will still be there in a few hours or days. 

Avoid Clicking On Links

If you receive messages, emails or social media comments containing links, the safest thing to do is avoid clicking on them. You can always hover over the link, or right-click to check its destination. 

Have You Been The Victim Of An AI Scam?

If you suspect that you have been the victim of an AI scam, then we recommend getting in touch with our team as soon as possible. At WRS, we’ve successfully recovered over £25 million for our clients, and have the processes in place to trace and recover finances lost to scams and fraud, including AI scams. Arrange a free consultation with our UK-based team of solicitors today. 

AI scams work similarly to other types of fraud and scams, in that scammers will look to manipulate and deceive victims into sending money, or sharing personal details. There are many different types of AI scams in circulation, but they all look to achieve the same goal.

Whilst it is now easier than ever for scammers to use AI to help them carry out a scam, there are some telltale signs that what seems like genuine communication is actually a scam.

Although scammers can clone a voice or use AI to make messages seem like legitimate communications, they cannot clone someone’s memories, knowledge or personality. If the person on the other side of the communication can’t give you straight answers or gives incorrect ones, is contacting you from an unknown number or email address, or is making unusual or seemingly urgent demands, stay alert – it could be an AI scam.

Yes, AI scams are a big problem, and one that is only set to grow in the near future. AI-driven technology is continually evolving and becoming more capable, and criminals are always looking for ways to make their scams both easier to carry out and more rewarding. Scammers are using these sophisticated, tech-driven advancements to target victims and to make themselves harder to identify once the crime has been committed.

It is believed that over £1 billion was lost to AI scams in the first quarter of 2024 alone. A recent study found that 48% of people feel at increased risk from scams as scammers adopt more sophisticated, technologically advanced methods. We encounter a wide range of scams and fraud methods here at WRS, but AI scams are particularly dangerous because they are created to look and feel real, and scammers are using highly emotive and distressing tactics to dupe victims out of money or information.