It’s not just celebrities who face deepfake attacks. Synthetic content targeting lower-profile public figures (like influencers and senior executives) is confusing audiences, damaging reputations, and putting careers at risk.
As the technology improves and becomes easier to access, fraudsters can create and share videos that mimic faces, voices and personas in a matter of minutes. And thanks to the viral nature of the internet, these fakes spread long before the law can react.
SnapDragon has already been at the front line, shoulder to shoulder with public figures who have been hit by deepfake attacks. We understand the shock and sense of violation, but most importantly, we know how to fight back and take fakes down. Read more in this case study.
While legislation scrambles to catch up, our AI-driven, legally informed approach gives law firms and their clients the power to monitor, detect, remove, and investigate deepfakes at speed and scale.
Where the law currently stands on deepfakes
Legislation is starting to catch up with the deepfake threat, but only in part. Laws in the UK, US and Europe now address intimate image abuse and mandate greater transparency, yet the wider risks of impersonation, brand harm and misinformation still fall largely outside their reach.
Deepfake laws in the United Kingdom
In the UK, deepfakes fall under the Online Safety Act 2023, which makes it a criminal offence to share, or threaten to share, intimate images without consent, including AI-generated images that merely appear to show a real person.
The Government has gone further, announcing new offences that will criminalise the creation of explicit deepfake material, with penalties of up to two years in prison. While these reforms close critical gaps, they remain focused on sexual and intimate image abuse; other uses of deepfakes, such as impersonation or brand harm, are not yet comprehensively covered.
Deepfake laws in the United States
In the US, the TAKE IT DOWN Act (2025) makes it a federal crime to knowingly publish or share non-consensual intimate images, including deepfakes, and requires platforms to remove such content within 48 hours of notice.
Several states have also enacted their own laws — such as Tennessee’s ELVIS Act, which protects against AI-based misuse of voice and likeness. However, US legislation is fragmented, with most measures focused on intimate imagery rather than wider harms like political deepfakes or brand impersonation.
Deepfake laws in the European Union
In Europe, the AI Act (in force from 2024, with full compliance by 2026) directly addresses deepfakes by requiring clear disclosure when content has been artificially generated or manipulated. It also prohibits harmful AI identity manipulations that could mislead or deceive.
While member states are beginning to introduce laws against intimate image abuse, the AI Act’s focus is broader, tackling systemic risk, transparency and misuse. That leaves gaps in enforcement for personal harms such as sexual deepfakes and blackmail.
The rising tide of malicious deepfakes
The deepfake threat isn’t on the horizon; it’s already here.
According to a report from Help Net Security, a deepfake attack takes place every five minutes – more than 100,000 incidents per year.
This poses a huge threat to corporations as well as individuals, with Regula Forensics stating that 49% of companies were targeted by both audio and video deepfakes in 2024 (up from 37% for audio and 29% for video in 2022).
Deepfake trends to watch
While the European Parliament estimates that 98% of deepfakes shared in 2025 are likely to be pornographic material, synthetic audio (voice cloning) and video tampering are now frequently used for impersonation scams and manufactured endorsements.
1. Hyper-realistic voice cloning
Voice cloning has become one of the most dangerous deepfake tools. With less than a minute of recorded speech, attackers can now generate convincing audio that mimics tone, accent, and emotion.
Because it’s faster and cheaper than video manipulation, voice is becoming the go-to weapon for impersonation, fraud, and social engineering.
The damage to reputation can be dramatic. Experts have warned that AI tools are now capable of cloning celebrity voices with alarming accuracy, often without consent or oversight. The Guardian reports that stars including Jennifer Aniston, Oprah Winfrey, and David Attenborough have all fallen victim, with Attenborough saying: “I am profoundly disturbed to find that these days my identity is being stolen by others and I greatly object to them using it to say what they wish.”
The realism of these imitations is already outpacing regulation, blurring the line between artistic use and identity theft.
2. Cross-channel deepfake fraud
Deepfake attacks are no longer confined to a single channel. Fraudsters now combine manipulated video, synthetic audio, forged documents, and behavioural cues to build convincing, multi-layered scams.
A target might receive a deepfaked video message, reinforced by a follow-up phone call using a cloned voice, and then “verified” with fake but credible paperwork.
The effect can be devastating. In one case, a Hong Kong company lost $25 million after an employee was duped by a video conference call in which every other participant was a deepfake. And consumers are being targeted in similar ways.
As reported by France 24, scammers have used deepfake videos of US politicians – including President Trump – in social media ads promoting fake stimulus checks and government benefits. These schemes exploit public trust in familiar figures to steal money and personal data, reaching tens of thousands of users before removal.
3. AI-generated sexual exploitation
Deepfake sexual exploitation is one of the fastest-growing threats because attackers don’t need access to real intimate images. Advances in technology allow them to fabricate explicit material from ordinary photos. They can use it to demand money or silence, or to cause reputational damage.
For example, in January 2024, explicit AI-generated images of Taylor Swift spread virally across social platforms. Her fans mobilised to mass-report the images, but some were viewed tens of millions of times before the platforms could react.
The fakes sparked global concern about the ease with which intimate imagery can be manufactured, monetised, and weaponised. People may face threats even if they never shared compromising content, because it can be entirely manufactured.
SnapDragon goes where the law doesn’t
SnapDragon isn’t waiting for the laws to catch up; we’re already taking action.
Our platform monitors the channels where deepfakes thrive, detects the tell-tale signs of synthetic content with advanced AI, and fights threats fast.
Beyond removal, we investigate the networks behind the deepfakes, linking accounts and uncovering bad actors to build cases that stand up in court.
For law firms, influential people, and brands, this means one thing: protection that moves at the speed of the threat. Book a free consultation to find out how we can help you protect your clients.
Laura Sodaymay
Brand Protection Specialist
Want to see how SnapDragon’s AI can protect your clients from deepfakes?
If you would like to explore how SnapDragon can strengthen your brand protection services, get in touch to schedule a demo. We would love to show you what’s possible.