That Quote That's Suddenly Everywhere
"AI is an insult to life itself."
I only stumbled across this quote a couple of months ago, attributed to Hayao Miyazaki, the man behind the famed Studio Ghibli. Since then, I've noticed it everywhere. As we speak, there's a massive trend of people using ChatGPT and other image generators to create pictures in the distinctive Ghibli aesthetic. My social feeds are flooded with AI-crafted images of ordinary scenes transformed with that unmistakable Ghibli magic - the soft lighting, the whimsical elements, the characteristic designs, the stuff of my childhood. You know what, I am not good with words like these. Here is one from yours truly.
The trend has brought Mr Miyazaki's quote back into the spotlight, wielded as a battle cry against this specific type of AI creation. There's just one small problem - this quote is being horribly misused.
I got curious about the context (because apparently I have nothing better to do than fact-check AI quotes when I should be testing that MCP server), so I dug deeper. Turns out, Mr Miyazaki wasn't condemning all artificial intelligence. He was reacting to a specific demonstration in 2016 where researchers showed him an AI program that had created a disturbing, headless humanoid figure that crawled across the ground like something straight out of a horror movie. For God's sake, the animation reminded him of a disabled friend who couldn't move freely. Yeah, I quite agree, that was the stuff of visceral nightmares.
Src: https://www.youtube.com/watch?v=ngZ0K3lWKRc&t=3s
It's also worth noting that the AI Mr Miyazaki was shown in 2016 was primitive compared to today's models like ChatGPT, Claude, or Midjourney. We have no idea how he might react to the current generation of AI systems that can create stunningly convincing Ghibli-style imagery. His reaction to that zombie-like figure doesn't necessarily tell us what he'd think about today's much more advanced and coherent AI creations. Yet the quote lives on, stripped of this crucial context, repurposed as a blanket condemnation of all generative AI.
The Eerie Valley of AI Art
Here's where it gets complicated for me. When I look at these AI-generated Ghibli scenes, they instantly evoke powerful emotions - nostalgia, wonder, warmth - all the feelings I've associated with films like "Spirited Away" or "Princess Mononoke" over years of watching them (for what it's worth, not a big fan of Totoro; it's okay). The visual language of Ghibli taps directly into something deep and meaningful in my experience.
That is what art does. That is what magic does. But this isn't quite that, is it? These mass-produced imitations feel like they're borrowing those emotions without earning them. There's an unsettling hollowness to the "art" - like hearing your mother's voice coming from a stranger. The signal is correct, but the source feels wrong.
I'm confronted with a puzzling contradiction: if a human artist were to draw in the Ghibli style (and many talented illustrators do), I wouldn't feel nearly the same unease. Fan art is celebrated, artistic influence is natural, and learning by imitation is as old as art itself. So why does the AI version feel different?
The Other Side
As a human, I consume AI features; as an engineer, I build them. And there is another side to this.
You've probably heard it. Every time someone on the Internet complains that America's gun culture is out of control, a voice from some dark and forgotten 4chan corner screams back: "A gun is just a tool. Guns don't kill people. People do. And if you take the gun away, they will find something else anyway."
But this argument increasingly fails to capture the reality of AI image generators. These systems aren't neutral tools - they've been trained on massive datasets of human art, often without explicit permission from the artists. When I prompt an AI to create "art in Ghibli style," I'm not merely using a neutral tool - I'm activating a complex system that has analyzed and learned from thousands of frames created by Studio Ghibli artists.
This is fundamentally different from a human artist studying and being influenced by Mr Miyazaki's work. The human artist brings their own lived experience, makes conscious choices about what to incorporate, and adds their unique perspective. The AI system statistically aggregates patterns at a scale no human could match, without discernment, attribution, or compensation.
I've built enough software systems to know that complexity breeds emergence. When algorithms make thousands or millions of decisions across vast datasets, the traditional model of direct human control becomes more of a fiction than a reality. You can't just look at the code and know exactly what it will do in every situation. Trust me, I've tried to "study" deep learning.
Perhaps most significantly, as these systems advance, the distance between the creator's intentions and the system's outputs grows. The developers at OpenAI didn't specifically write code that says "here's exactly how to draw a flattering image of a dude taking notes on a motorcycle" - they created a system that learned from millions of images, and now it can generate Ghibli-style art that no human specifically programmed it to make. These AI systems develop abilities their creators didn't directly put there and often can't fully predict. This expanding gap between intention and outcome makes the "tools are neutral" argument increasingly unsatisfying.
This isn't to say humans have lost control entirely. Through system design, regulation, and deployment choices, we retain significant influence. But the "tools are neutral" framing no longer adequately captures the complex, bidirectional relationship between humans and increasingly sophisticated AI.
Why We Can't Resist Oversimplification
So far, there are two camps. The "AI is an insult" camp speaks for people whose work and lives are negatively impacted. The "tools are neutral" camp defends AI creators. I tried, but I am sure I have done a less-than-stellar job capturing the thought processes of both camps. Still, even in my rough rendering, the picture feels fairly complex. This complexity is exactly why we humans lean toward simplified narratives like "AI is an insult to life itself" or "It's just a tool like any other." The reality is messy, contradictory, and doesn't fit neatly into either camp.
Humans are notoriously lazy thinkers. I know I am. Give me a simple explanation over a complex one any day of the week. My brain has enough to worry about with keeping our production systems alive. I've reached the point where I celebrate every morning that the #system-alert-critical channel has no new messages.
This pattern repeats throughout history. Complex truths get routinely reduced to easily digestible (and often wrong) summaries. Darwin's nuanced theory of evolution became "survival of the fittest." Einstein's revolutionary relativity equations became "everything is relative." Nietzsche's exploration of morality became "God is dead." In each case, profound ideas were flattened into bumper sticker slogans that lost the original meaning. They make good YouTube thumbnails, though.
This happens because complexity requires effort. Our brains, evolved for quick decision-making in simpler environments (like not getting eaten by tigers), naturally gravitate toward cognitive shortcuts. A single authoritative quote from someone like Mr Miyazaki provides an easy way to validate existing beliefs without engaging with the messier reality.
There's also power in simple narratives. "AI threatens human creativity" creates a clear villain and a straightforward moral framework. It's far more emotionally satisfying than grappling with the ambiguous benefits and risks of a transformative technology. I get it - it's much easier to be either terrified of AI or blindly optimistic about it than to sit with the uncertainty.
I am afraid that in the coming weeks and months, we cannot afford such simplification.
The Choice We Have to Make
Young technologists today (myself included) find ourselves in an extraordinary position. We're both consumers of AI tools created by others and creators of systems that will be used by countless others. We stand at the edge of perhaps the most transformative wave of innovation in human history, with the collective power to influence how this technology shapes our future and the future of our children. FWIW, I don't have a child yet, but I like to think I will.
The questions raised by AI-generated Ghibli art - about originality, attribution, the value of human craft, the economics of creation - aren't going away. They'll only become more urgent as these systems improve and proliferate.
The longer I work in tech, the more I realize that the most important innovations aren't purely technical - they're sociotechnical. Building AI systems that benefit humanity requires more than clever algorithms; it requires thoughtful consideration of how these systems integrate with human values and creative traditions.
For those of us in this pivotal position, neither absolute rejection nor blind embrace provides adequate guidance. We will need to navigate through this, hopefully with better clarity than Christopher Columbus when he "lost" his way to discovering America. My CEO made me read AI-2027 - there is a scenario where humans fail to align AI superintelligence and get wiped out. Brave new world.
1. Embrace Intentional Design and Shared Responsibility
We need to be deliberate about what values and constraints we build into creative AI systems, considering not just what they can do, but what they should do. This might mean designing systems that explicitly credit their influences, or that direct compensation to original creators whose styles have been learned.
When my team started writing our first agent, we focused entirely on what was technically possible. Is this an agent or a workflow? Is this a tool call or a node in the graph? Long context or knowledge base? I know, technical gibberish. The point is, we will soon evolve past that learning curve, and what comes next is thinking through the implications.
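For readers who haven't run into that "agent or workflow" gibberish, here is a toy sketch of the distinction. Nothing here is our actual system: `llm_decide` is a hypothetical stand-in for a real model call, and the tools are trivial placeholders. The point is only where the control flow lives.

```python
def llm_decide(state):
    # Hypothetical stand-in for a model call: picks the next tool
    # based on what the state still lacks.
    if "data" not in state:
        return "fetch"
    if "summary" not in state:
        return "summarize"
    return "done"

# Placeholder tools; each returns a new state with one field added.
TOOLS = {
    "fetch": lambda s: {**s, "data": "raw records"},
    "summarize": lambda s: {**s, "summary": "3 bullet points"},
}

def workflow(state):
    # Workflow: the programmer fixes the sequence of steps up front.
    state = TOOLS["fetch"](state)
    state = TOOLS["summarize"](state)
    return state

def agent(state, max_steps=5):
    # Agent: the model chooses the next step at runtime; the
    # programmer only bounds the loop.
    for _ in range(max_steps):
        action = llm_decide(state)
        if action == "done":
            break
        state = TOOLS[action](state)
    return state
```

In this toy case both paths end at the same state; the difference is who decided the path, which is exactly where the responsibility questions start.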
2. Prioritize Augmentation Over Replacement
The most valuable AI applications enhance human creativity rather than simply mimicking or replacing it. We should seek opportunities to create tools that make artists more capable, not less necessary.
When I see the flood of AI-generated Ghibli art, I wonder if we're asking the right questions. The most exciting creative AI tools don't just imitate existing styles - they help artists discover new possibilities they wouldn't have found otherwise. The difference between a tool that helps you create and one that creates instead of you may seem subtle, but it's profound.
I have been lucky enough to be part of meetings where the goal of AI agents is to free the human colleagues from boring, repetitive tasks. I sure hope that trajectory continues. Technology should serve human values, not the other way around.
3. Ensure Diverse Perspectives and Continuous Assessment
The perspectives that inform both the creation and governance of AI systems should reflect the diversity of populations affected by them. This is especially true for creative AI, where cultural context and artistic traditions vary enormously across different communities.
It's so easy to build for people in my immediate circle and call it a day. As an Asian, I see how AI systems trained predominantly on Western datasets create a distorted view of creativity and culture. Unless told otherwise, a genAI model would assume I am a white male living in the US. Bias, prejudice, stereotype. We have seen this before.
Finding My Way in the AI Landscape
The reality of our relationship with AI is beyond simple characterization. It is neither an existential threat to human creativity nor a neutral tool entirely under our control. It represents something new - a technology with growing capabilities that both reflects and reshapes our creative traditions.
Those Ghibli-style images generated by AI leave me with mixed feelings that I'm still sorting through. On one hand, I'm amazed by the technical achievement and can't deny the genuine emotions they evoke. On the other hand, I feel I am being conditioned to feel that way.
Perhaps this ambivalence is exactly where we need to be right now - neither rejecting the technology outright nor embracing it uncritically, but sitting with the discomfort of its complexity while we figure out how to move forward thoughtfully.
For our generation that will guide AI's development, the challenge is to move beyond reductive arguments. Neither blind techno-optimism nor reflexive technophobia will serve us well. Instead, we need the wisdom to recognize both the extraordinary potential and the legitimate concerns about these systems, and the courage to chart a course that honors what makes human creativity valuable in the first place.
This post was written with assistance from Claude, which felt a bit meta given the topic. It suggested a lot of the structure, but all the half-baked jokes and questionable analogies are mine alone. And it still took me a beautiful Saturday to pull everything together in my style.