Exclusive: Scammers impersonate WPP CEO with fake WhatsApp account, voice clone, and YouTube footage in virtual meeting
The CEO of WPP, Mark Read, was targeted in a sophisticated deepfake scam involving an AI voice clone. In a recent email to company leadership, Read detailed the scam and warned staff to be wary of fraudulent calls purporting to come from top executives.
Scammers set up a WhatsApp account bearing Read’s image and used it to schedule a Microsoft Teams meeting that appeared to include him and another senior executive. During the meeting, they deployed a voice clone and YouTube footage while impersonating Read through the chat window. The fraudsters tried to solicit money and personal information by asking an “agency leader” to set up a new business.
“Fortunately, the attackers were unsuccessful,” Read noted in the email. “We must all remain alert to tactics that extend beyond emails to exploit virtual meetings, AI, and deepfakes.”
A WPP spokesperson confirmed that the phishing attempt did not succeed, saying, “Thanks to the vigilance of our employees, including the executive involved, the incident was thwarted.” WPP did not answer questions about when the attack took place or which executives, other than Read, were targeted.
Deepfake attacks, once primarily associated with online harassment, pornography, and political misinformation, have become markedly more common in the corporate sphere over the past year. AI voice clones have deceived banks, swindled financial institutions out of millions of dollars, and put cybersecurity teams on high alert. In one well-known case, a former executive of the now-defunct digital media startup Ozy pleaded guilty to fraud and identity theft; reports revealed he had used voice-altering software to impersonate a YouTube executive in an attempt to deceive Goldman Sachs into investing $40 million in 2021.
The WPP scam also seemed to employ generative AI for voice cloning, alongside basic methods like using a publicly available image as a contact display picture. This incident highlights the array of tools scammers now use to replicate genuine corporate communications and impersonate executives.
“We have observed a growing level of sophistication in cyber-attacks against our colleagues, especially those aimed at senior leaders,” Read stated in the email.
Read’s email outlined several warning signs to watch for, such as requests for passports, money transfers, or any mention of a “secret acquisition, transaction, or payment known only to a few.”
“Just because the account features my photo doesn’t guarantee it’s me,” Read cautioned in the email.
WPP, a publicly traded company with a market capitalization of approximately $11.3 billion, has acknowledged on its website that it has been combating fake sites misusing its brand name and is collaborating with relevant authorities to halt the fraud.
A pop-up message on the company’s contact page warns, “Please be aware that WPP’s name and those of its agencies have been fraudulently used by third parties – often communicating via messaging services – on unofficial websites and apps.”
Many companies are grappling with the rise of generative AI, pouring resources into the technology while also confronting its risks. Last year, WPP announced a partnership with chip-maker Nvidia to produce advertisements using generative AI, presenting it as a transformative development for the industry.
“Generative AI is rapidly revolutionizing the marketing landscape. This innovative technology will reshape how brands produce content for commercial purposes,” Read said in a statement last May.
In recent years, affordable audio deepfake technology has become widely accessible and significantly more convincing. Some AI models can create realistic simulations of a person’s voice using just a few minutes of audio, easily sourced from public figures. This capability allows scammers to generate manipulated recordings of nearly anyone.
Deepfake audio has been used to target political candidates worldwide, but it has also ensnared less prominent people. A Baltimore school principal, for instance, was placed on leave this year after audio recordings appeared to capture him making racist and antisemitic remarks; the recordings were later revealed to be a deepfake created by a colleague. Bots have also impersonated Joe Biden and former presidential candidate Dean Phillips.