Clarity Global

Corporate reputation's biggest battle - the rise of the deepfake

It’s no secret that security breaches have a direct impact on an organisation’s reputation. An alarming report by insurer Hiscox from the end of last year found that 67% of organisations had experienced more cyber incidents in the past 12 months than in the previous year; 47% had greater difficulty attracting new customers following a cyber attack; 43% lost customers; and 38% experienced bad publicity.


This is bad news.

From Phishing to Vishing and Deepfakes

Remember the days when you got an email pretending to be from your boss or CFO, asking you to transfer money into an account to support a good cause? Many people felt too uncomfortable to question their superiors and went ahead and transferred the money. Cyber criminals stole $44.2 million through phishing attacks in 2021, and phishing remains the most common form of cyber crime, with an estimated 3.4 billion spam emails sent every day.

This is also bad news. With the emergence of smarter predators using AI, it’s also getting worse. 

In 2024, AI changed the game, most notably in the space of voice phishing (vishing) and deepfakes. Static impersonated emails have been replaced by ‘real’ voices and videos. Using clever social engineering techniques combined with AI-generated voice and video, criminals are creating near-perfect videos and phone calls of your boss. According to the GSMA, a non-profit that represents mobile network operators, hybrid vishing attacks (which use multiple forms of communication) surged by 554% in 2023. When it comes to deepfakes, nearly 50% of global organisations have now been victims of deepfake attacks, and consulting and accounting firm Deloitte estimates that deepfake-related fraud will cost $40 billion by 2027.

Financial cost is one way organisations are hurt. The other is the reputational damage caused by deepfakes spreading misinformation about companies and people. Criminals are now cloning senior executives’ voices and creating fake speeches or interviews that can destroy trust in seconds.

Creating control mechanisms and building resilience 

Fraud

With organisations increasingly at peril, how can they build the right resilience to defend themselves against these new threats and protect their reputation - and even their corporate survival?

Here are some important aspects to consider when an organisation is exposed to AI-related fraud, according to Clarity’s CTO Will Julian Vicary and Partner/Managing Director of Clarity Benelux Thomas Cordes.

  • If a request is remotely sensitive or involves financial information, always verify it. Ask a question that AI is not (yet) able to answer: something that you and the real person both know and that is not public information
  • Attackers often communicate via unusual channels (e.g. WhatsApp rather than Slack or email). Never feel pressured to give information over these channels
  • Stay vigilant: if you are the target of an attack, it may look very realistic, even down to video and voice. If anything smells off, cut the conversation off and find a means of verifying that the contact is legitimate. Things to look out for include a ‘metallic’ tone of voice, static eye movement, or a lag between video and voice responses

Misinformation

When it comes to protecting an organisation against reputational damage caused by AI and deepfake misinformation or accusations, there are two aspects to consider, both of which need a central place in any crisis management planning: first, the damage caused by the accusation or misinformation itself; and second, the public’s perception that the organisation might lack the controls required to defend itself against this form of attack.

Key areas to follow include:

  • Ensure all employees have been trained in how to detect an attack and in the organisational process to follow for incident reporting and response. This training should ideally be delivered by both IT and comms professionals
  • Include deepfake and AI incidents as a high-potential risk in any scenario planning during crisis planning workshops, covering all plausible misinformation and accusation scenarios
  • Have a robust answer in place, covering all processes and tools, for when an incident happens, so that people understand there is governance and resilience in place. This is key to gaining the public’s trust
  • Have a trained AI expert spokesperson (usually the CTO or CIO) who can clearly explain how the technology works and how to handle complex situations - key to correcting any misunderstanding or misreporting
  • If a person in the organisation has been impersonated, it’s important that they quickly and publicly refute any claim and clarify the situation. This is essential to regaining control and trust
  • Work very closely with your community manager to ensure social media channels are monitored and managed, as misinformation spreads instantly in social media environments
  • Where possible, have a mechanism in place that allows you to quickly remove any video or audio misinformation
  • Have third-party advocates, such as partners, investors or customers, who can speak on your behalf - key to tackling any counter-narrative that might have been created as a result of the misinformation

Corporate reputation will be one of the most closely watched areas in 2025, and it is facing risks on multiple fronts. When it comes to AI and deepfakes, preparedness is everything.
