Google Gemini’s “Nano Banana AI Saree” Trend Sparks Privacy and Ethics Concerns
Mumbai | October 5, 2025 — A viral social media trend using Google Gemini’s “Nano Banana AI Saree” filter has quickly turned controversial. The AI-powered tool, which allows users to transform selfies into stylized retro portraits draped in traditional Indian sarees, is facing criticism after multiple users reported distorted, unsettling, and overly manipulated images generated by the platform.

AI Craze Turns Controversial
The Nano Banana AI Saree trend began as an aesthetic experiment on social platforms like Instagram, X (formerly Twitter), and Threads, where users showcased AI-edited portraits that gave them a cinematic, vintage look. Within days, the trend became a sensation, attracting millions of views under hashtags such as #NanoBananaAISaree and #GeminiAIEdit.
However, the viral excitement soon turned into discomfort. Several users reported “creepy” image alterations, including mismatched facial features, exaggerated smiles, and noticeable skin tone changes. Some users alleged that the AI filter altered their body proportions or generated hyper-realistic but unrecognizable versions of their faces — leading to concerns over AI bias, data safety, and digital consent.
User Complaints and Reactions
Social media users across India and Southeast Asia have shared screenshots of their flawed AI edits. A Reddit thread titled “Gemini turned my face into someone else’s” received thousands of comments within hours. One user posted,
“It looked artistic at first, but the AI made my eyes look alive in a very unnatural way. It was unsettling.”
Influencers who had initially promoted the filter for engagement have since deleted their posts or issued warnings. Some even expressed regret, noting that they had not reviewed the app’s data permissions before uploading their selfies.
Google has not released an official statement regarding the complaints. However, early reports suggest that the “Nano Banana” feature may have originated as an experimental visual stylization project within Gemini’s creative suite and was not intended for mass public use.
Experts Warn of Ethical and Privacy Risks
Cybersecurity and AI ethics experts have raised alarms about the privacy implications of such viral tools.
Dr. Riya Deshmukh, an AI researcher based in Mumbai, commented:
“Users often underestimate what happens once their images are uploaded. AI models like Gemini’s stylization tool may store or reuse facial data for training, and the terms of consent are often unclear.”
Analysts also caution that AI-generated filters can unintentionally reinforce beauty stereotypes or create manipulated representations that distort cultural identity — particularly when applied to traditional attire like sarees.
Virality and Algorithmic Push
The trend’s popularity surged through short-form video platforms where creators showcased quick transformations. Within a week, the hashtag #NanoBananaAISaree surpassed 1.8 million views globally.
Digital marketing experts note that AI-enhanced trends are now being algorithmically boosted, as platforms reward high engagement from “before-and-after” visual content.
However, the same virality also accelerates the spread of unsafe or misleading AI tools — especially those hosted on third-party, non-verified platforms.
The Bigger Picture
The Google Gemini Nano Banana AI Saree episode highlights a broader issue facing today’s digital ecosystem — the fine line between creative innovation and ethical responsibility. As generative AI becomes more accessible, experts urge both developers and users to establish stronger consent frameworks and data transparency norms.
In India, where traditional imagery carries deep cultural meaning, the misuse of such tools could have far-reaching implications for users — from deepfake risks to identity manipulation in the age of generative media.
Key Takeaway
While AI-based creativity is redefining digital expression, the Nano Banana Saree trend serves as a cautionary tale: innovation without regulation can blur the line between art and intrusion. Until formal guidelines are issued, experts recommend avoiding experimental filters, verifying app authenticity, and reading all data consent policies before sharing personal images online.
