What began as a lighthearted fashion experiment has now turned into a privacy debate. The “Banana AI saree trend”, powered by Google Gemini’s Nano Banana model, has gone viral on Instagram, with users transforming their images into elegant saree portraits set against vintage-style backdrops.
But one woman’s unsettling experience has sparked widespread concern. In a video that has now been viewed over seven million times, she revealed that Gemini-generated edits of her saree photo included a detail she never expected: a mole on her body that wasn’t visible in the uploaded picture.
“How Did Gemini Know?”
In the viral Instagram video, the user described her shock:
“I generated my image and I found something creepy… I uploaded my image on Gemini with a saree prompt. When I saw the result, Gemini had added a mole in the exact spot where I have one. It wasn’t in the photo I uploaded. How did it know? This is very scary and creepy.”
She urged others to be cautious, warning: “Please be safe with whatever you upload on social media or AI platforms.”
A Flood of Similar Stories
The post has since attracted hundreds of comments, with users reporting similar experiences. Some claimed that tattoos not visible in their uploaded pictures appeared in AI-generated edits. Others speculated that Gemini was drawing from users’ past photos and digital footprints.
One comment read: “Everything is connected. Gemini belongs to Google. They have access to your photos and videos, and that’s how they make these edits.”
Another added: “This happened to me too. My tattoos, which weren’t visible in the picture I uploaded, showed up in the AI output. I still don’t know how.”
Some commenters offered a more technical explanation: AI systems may draw on an individual’s broader online presence, analyzing multiple data points to produce realistic images.
What Is Google’s Nano Banana?
The Nano Banana model, a feature within Google’s Gemini app, first gained traction for producing 3D figurine-like edits. Its capabilities quickly extended to stylistic filters, including the now-famous saree trend.
Users upload photos and apply prompts, and Nano Banana generates hyper-realistic outputs that blend AI artistry with user-specific details. But its apparent ability to reproduce hidden or private features has triggered fears about how much data AI systems can access and how they use it.

Privacy, Data, and AI Ethics
Experts say that while AI does not “mystically” know hidden details, such systems may rely on vast amounts of user data collected across platforms. Professor Triveni Singh, cybercrime expert and former IPS officer, has previously warned:
“AI tools may combine inputs from personal uploads, social media activity, and digital footprints to create outputs that appear eerily accurate. The challenge is not just technical—it is about transparency, consent, and data protection.”
With India’s Digital Personal Data Protection Act, 2023 (DPDPA) now on the statute books, the incident underscores the urgent need for clearer disclosures from AI platforms about what data they use and how it is processed.
A Growing Debate
For now, the Banana AI saree trend continues to thrive on Instagram, blending fashion, technology, and digital creativity. Yet, the viral controversy has reframed the conversation: Is the cost of hyper-realistic AI art a loss of privacy?
As more users come forward with similar experiences, regulators, technologists, and the public may be forced to grapple with the hidden trade-offs of AI-powered personalization.