Organisations are aware of the imminent threat from fraudsters using AI tools such as deepfakes and voice cloning to target contact centres. And although they may not be fully protected yet, many have some kind of plan in place to spot fake callers and other automated scams. Consumers, however, remain largely unprotected – and that is a problem not just for them, but for financial institutions too.

Consumers are increasingly vulnerable to fraudsters targeting them on their phones and through social media, using a variety of tricks and scams. Through what is known as social engineering, fraudsters manipulate their victims into parting with their money. One technique is the 'safe account' scam, where fraudsters convince you they are calling from your bank, claim your account has been compromised, and offer to help you move your money to a 'safe' account – one that they in fact control.

Another common example is the 'Hi Mum' con, where scammers posing as the children of their potential victims send hundreds of targets a text or WhatsApp message claiming they have a new number because they have lost or broken their phone. The goal is to convince you to send money to help your 'child', and the tactic is proving increasingly successful.

Unfortunately, examples like these are becoming more common, with AI making such scams easier to run and more convincing. Generative AI helps fraudsters find your personal information, photos and videos, then use them to create fake messages, clone your voice or produce 'deepfakes' designed to trick even the most savvy people into handing over their money. Imagine thinking you're hearing your child's voice on the phone!