Amid rising tensions in West Asia, social media platforms have seen a surge in AI-generated war-related videos. In response, X, the platform owned by Elon Musk, has announced strict measures against misleading AI-generated content. The company has made clear that accounts posting AI-generated war videos without proper disclosure will face strong action, including suspension from its revenue-sharing program for up to 90 days.
According to the company, misleading videos tend to spread rapidly during sensitive periods such as armed conflicts, potentially escalating confusion and global tensions. In recent days, numerous videos related to the conflict involving the United States, Israel, and Iran have circulated online, many of which have been flagged as questionable or unverifiable.
Major Policy Shift
In an official statement, the platform’s Head of Product, Nikita Bier, said the company is revising its policies to ensure that accurate and reliable information reaches users, particularly during times of crisis. He acknowledged that advances in artificial intelligence have made it significantly easier to create highly realistic fake videos capable of misleading audiences.
The company revealed that in a recent case, an individual was found operating more than 30 accounts that were posting AI-generated war videos. Several of these accounts had reportedly been hacked and rebranded to appear authentic.
‘Made with AI’ Label Now Compulsory
Under the updated policy, users posting AI-generated war-related videos must clearly display a “Made with AI” label. The label is intended to inform viewers that the content is artificially created and not genuine footage.
Failure to disclose AI-generated content will result in a 90-day suspension from the platform’s creator revenue-sharing program. Repeated violations may lead to permanent removal from the program. The company stated that the move aims to enhance transparency and curb the spread of misinformation.
Rising Traffic, Greater Responsibility
With global tensions intensifying, platform traffic has reportedly reached record levels as users turn to social media for real-time updates. The company emphasized that responsible content moderation and transparency are now more critical than ever.
Recent rumors surrounding Iran's Supreme Leader Ayatollah Ali Khamenei have further fueled speculation online, leading to a flood of unverified videos and claims. Experts warn that AI-generated deepfake videos make it increasingly difficult for ordinary users to distinguish authentic footage from fabricated content.
Strong Message Against Misinformation
Technology analysts believe AI-powered deepfakes and fabricated war videos can mislead the public, influence diplomatic relations, and even impact financial markets. The platform’s latest action is being viewed as a significant step toward establishing accountability in the digital ecosystem.
The company has indicated that it may introduce more advanced tools in the future to detect and label AI-generated content more effectively. For now, users are being urged to exercise caution and verify sources before sharing sensitive material.
In an era where information spreads instantly, the balance between speed and accuracy has become crucial. The new policy reflects an attempt to maintain that balance while addressing the growing risks posed by AI-generated misinformation.
