The Ghibli trend has taken social media by storm, captivating millions and generating thousands of images inspired by the signature style of Studio Ghibli. In recent weeks, users across platforms have flocked to OpenAI’s artificial intelligence (AI) chatbot, ChatGPT, using its image transformation capabilities to recreate personal photos, memes, and historical scenes in the whimsical, hand-drawn aesthetic of Hayao Miyazaki’s beloved films, such as Spirited Away and My Neighbour Totoro.
This sudden surge in engagement has significantly boosted the visibility of OpenAI’s AI chatbot. However, while users are eagerly uploading images, privacy and data security experts have raised alarms about the implications of this viral phenomenon. Users might be inadvertently allowing the company to train its AI models with their submitted images, which raises valid concerns regarding the handling of personal data.
Moreover, the permanence of facial data available online poses a significant privacy threat. In the wrong hands, such information could facilitate cybercrimes, including identity theft. As the trend has caught the attention of a global audience, it’s crucial to unpack the privacy risks associated with this burgeoning Ghibli art movement.
The Genesis and Rise of the Ghibli Trend
The Ghibli trend emerged following the introduction of a new image generation feature in ChatGPT at the end of March. Powered by the improved GPT-4o AI model, the feature was initially available to paid subscribers before being rolled out to free-tier users a week later. While image generation had previously been possible through the DALL-E model, the updated capability let users upload their own images, resulting in more vivid transformations and increased engagement with the platform.
The initial excitement among users quickly manifested in a surge of Ghibli-style art creation. Grant Slatton, a software engineer and AI enthusiast, is credited with significantly popularizing the trend. His viral post, featuring an image of himself, his wife, and their dog transformed into a Ghibli-inspired artwork, amassed over 52 million views, along with 16,000 bookmarks and 5,900 reposts as of this writing.
While exact numbers on users participating in the trend remain elusive, the overwhelming engagement on platforms such as X (formerly Twitter), Facebook, Instagram, and Reddit suggests that millions have tried their hand at creating Ghibli-style artworks. The trend also reached institutional levels, with brands and even government bodies, like India’s MyGovIndia account, joining in the fun by sharing their own Ghibli-style visuals. Notable figures, including celebrities like Sachin Tendulkar and Amitabh Bachchan, have also shared similar creations online.
Privacy and Data Security Concerns Behind the Ghibli Trend
OpenAI does not prominently explain how images uploaded to ChatGPT are handled, and this lack of transparency means that individuals creating Ghibli-style images may unwittingly share their data with the company by default. Questions arise regarding the fate of this data once uploaded.
OpenAI’s support resources indicate that unless a user manually deletes a chat, the data remains on the company’s servers indefinitely. Even after deletion, complete removal can take as long as 30 days. During this window, the company may still use the data for model training (this applies to regular plans, not Teams, Enterprise, or Education subscriptions).
Technical Product Manager Ripudaman Sanger emphasized how difficult it is to reverse the training process for an AI model. Even if identifying details are stripped from stored user data, the knowledge acquired from that data remains embedded in the model’s parameters, complicating any effort to completely eliminate the influence of the original input.
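The difficulty Sanger describes can be illustrated with a toy model (a deliberately simplified sketch, not OpenAI’s actual pipeline): once a training example has shaped a model’s weights, deleting the example from storage does nothing to the trained model, and only retraining from scratch without that example removes its influence.

```python
def train(data, lr=0.1, epochs=100):
    """Fit a one-parameter model y = w * x by stochastic gradient descent."""
    w = 0.0
    for _ in range(epochs):
        for x, y in data:
            w -= lr * 2 * x * (w * x - y)  # gradient of the squared error (w*x - y)**2
    return w

# Two clean points on the line y = 2x, plus one outlier "personal" data point.
full_dataset = [(1.0, 2.0), (2.0, 4.0), (3.0, 9.0)]
w_full = train(full_dataset)

# Deleting the outlier from storage does not touch the already-trained weight...
w_after_delete = w_full

# ...only retraining without it removes its influence (the weight returns to ~2.0).
w_retrained = train(full_dataset[:-1])

print(w_full, w_retrained)  # the two weights differ: the deleted point still shapes w_full
```

Scaled up to models with billions of parameters, the same asymmetry is why "machine unlearning" remains an open research problem: short of costly retraining, a model keeps a trace of whatever it was trained on.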
Critics argue that the core issue lies in the opaque nature of data usage in AI systems. Many users have little understanding of the fate of their uploaded images. Pratim Mukherjee, Senior Director of Engineering at McAfee, noted that users are often left without options to delete their data. This scenario raises broader concerns regarding the control and consent users have over their personal information.
Moreover, in the event of a data breach, the consequences can be grave. The rise of deepfake technology presents new risks, allowing malicious actors to misuse facial data to create content that could damage reputations or enable identity theft.
The Consequences Could Be Long-Lasting
While some may view the risk of a data breach as minimal, they overlook the enduring nature of personal facial data. Gagan Aggarwal, a researcher at CloudSEK, highlighted that unlike other forms of personally identifiable information (PII) or credit card information, which can be changed or replaced, facial imagery leaves a permanent digital footprint, leading to an irreversible loss of privacy.
Individuals affected by a future data breach could face security concerns long after their information is leaked. Modern open-source intelligence tools capable of conducting widespread facial recognition further exacerbate this issue. If compromised, the data could lead to significant risks for those who participated in the Ghibli trend.
The urgency for awareness around such data-sharing behaviors grows as more participants embrace cloud-based technologies. Recently, Google unveiled its Veo 3 video generation model, which can create ultra-realistic videos, potentially paving the way for similar trends that utilize personal data without adequate safeguards.
The intent here is not to instill fear, but to prompt awareness regarding the risks associated with seemingly innocuous online trends and casual data sharing with AI models. Increased knowledge could empower individuals to make more informed choices moving forward.
As Mukherjee concluded, users should not have to compromise their privacy for a moment of entertainment. Ensuring transparency, control, and security should form the foundation of user experiences with AI technologies. As the landscape evolves and new capabilities emerge, users must remain vigilant while engaging with these tools. The adage about fire being a helpful servant but a dangerous master applies equally to AI.