Google has revealed that it suspended over 39.2 million advertiser accounts in 2024, a sharp rise compared to previous years. According to the tech giant, most of these accounts were blocked proactively before they could serve malicious or harmful ads to users.
In its annual Ads Safety Report, Google detailed the scope of its enforcement actions, stating it blocked 5.1 billion “bad ads”, restricted 9.1 billion, and took action on 1.3 billion webpages in 2024 alone. Additionally, over 5 million advertiser accounts were suspended for scam-related violations, underscoring the scale of abuse on digital ad platforms.
By contrast, in 2023 the company suspended 12.7 million advertiser accounts, blocked 5.5 billion harmful ads, and restricted 6.9 billion more. The more than threefold rise in suspensions for 2024 reflects a major shift in detection and enforcement capabilities, driven in large part by artificial intelligence (AI).
Google said it is relying heavily on AI-powered systems to detect emerging abuse tactics and quickly respond to threats. These tools analyze patterns such as business impersonation, fake payment credentials, and AI-generated deepfakes, which are increasingly being used in scams involving public figure impersonation.
“We suspended over 700,000 accounts specifically for using AI-generated deepfakes in impersonation scams,” the company noted, adding that this was part of a broader campaign in which 415 million scam ads were blocked or removed.
Most Common Violations: Ad Network Abuse and Trademark Misuse
Among the top violations in 2024, Google flagged:
- Ad network abuse – 793.1 million violations
- Trademark misuse – 503.1 million
- Misuse of personalized ad content – 491.3 million
- Non-compliance with legal requirements – 280.3 million
- Violations in financial services – 193.7 million
- Misrepresentation – 146.9 million
Ads Restricted on Harmful and Sensitive Content
Google also restricted or blocked a large volume of ads tied to webpages hosting sensitive or harmful content, including sexual material, malware, shocking or derogatory themes, weapons, gambling, alcohol and tobacco, and intellectual property abuse.
Google said these enforcement actions are vital, particularly as malvertising continues to serve as a key vector for malware distribution, where fraudulent ads mislead users into visiting harmful websites.
Expanding Global Verification and Election Ad Oversight
Google also ramped up its advertiser identity verification program, expanding it to more than 200 countries and territories. As part of its election ad policy enforcement, more than 8,900 new election advertisers were verified, and 10.7 million election-related ads were removed for failing to meet verification standards.

“The ad safety landscape is constantly evolving, influenced by AI advancements, new abuse tactics, and global developments,” the company stated. “Staying ahead requires agility, innovation, and vigilance from the entire industry.”
Growing Threat Landscape Demands Coordinated Response
The growing sophistication of online scams, particularly those leveraging artificial intelligence, has prompted tech companies to invest heavily in detection tools. Google’s aggressive push to shut down ad abuse reflects broader concerns around election integrity, online fraud, and user safety in the digital economy.
With its latest report, Google has signaled that it remains committed to upholding trust in the ad ecosystem—even as the nature of threats becomes more complex and global in scale.