
Just because you can, should you? Rethinking how startups apply AI with purpose
A new report from MIT recently sent shockwaves through the AI world: 95% of enterprise generative AI pilots deliver zero return on investment. Companies are moving fast with AI; some argue too fast. A wave of products is rushing to bolt on generative models, often following the same pattern: “Look what it can do!” Rarely: “How does this improve the customer experience?”
The result is a flood of features that feel impressive but aren’t always useful. Automated copy. AI avatars. One-click content generation. They work, technically. But just because something can be generated doesn’t mean it’s worth generating, and that is the harder question startups need to start asking: not can we, but should we?
As founders building an AI-powered storytelling product for families, we’ve had to confront this head-on. The tech is seductive: flexible, clever, and scalable. But real value doesn’t come from showing off what the model can do; it comes from deciding what shouldn’t be done by AI at all.
One of the biggest misconceptions is that AI saves time. That’s only true if you’re happy with generic output. If you want something that feels intentional, emotionally intelligent, and trustworthy, you don’t get to skip the work. You just shift it.
In our case, we weren’t just generating text and images. We were shaping story experiences for kids, not as an end in itself, but as a way to spark imagination, inspire creativity, and make storytelling a shared adventure between children and the people around them. That meant building systems to ensure character continuity, emotional tone, cultural sensitivity, and age appropriateness. It meant rejecting the fast path and designing real infrastructure: orchestration layers, prompt pipelines, visual consistency tools, and fallback systems. When the stakes are nurturing curiosity and joy, not just shipping content, cutting corners isn’t an option. It’s the invisible work that takes time, breaks often, and rarely gets credit, but it makes the difference between a disposable output and a moment a child will remember.
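For the technically curious, here’s roughly what that invisible work can look like. Everything below is a simplified, hypothetical sketch of the kind of orchestration we’re describing, not Luna’s actual code: the model call is a single step inside a loop of checks, retries, and a safe fallback.

```python
from dataclasses import dataclass

@dataclass
class StoryPage:
    text: str
    passed_checks: bool

# Hypothetical checks: in a real system each would be its own model call or ruleset.
def check_character_continuity(text: str, character_sheet: dict) -> bool:
    # e.g. confirm the hero's name hasn't drifted mid-story
    return character_sheet["name"] in text

def check_age_appropriateness(text: str, age: int) -> bool:
    banned = {"terrifying", "gore"}  # placeholder word list, not a real policy
    if age < 6:
        banned |= {"monster"}
    return not any(word in text.lower() for word in banned)

def generate_page(prompt: str) -> str:
    # Stand-in for a call to a generative model.
    return f"Mila the fox set off on a gentle adventure. ({prompt})"

def build_page(prompt: str, character_sheet: dict, age: int, max_retries: int = 2) -> StoryPage:
    """Generate one page, re-prompting (and finally falling back) if checks fail."""
    for attempt in range(max_retries + 1):
        hint = "" if attempt == 0 else f" Keep {character_sheet['name']} consistent and gentle."
        text = generate_page(prompt + hint)
        if check_character_continuity(text, character_sheet) and check_age_appropriateness(text, age):
            return StoryPage(text=text, passed_checks=True)
    # Fallback: a safe, human-written template rather than a page that failed its checks.
    return StoryPage(text=f"{character_sheet['name']} smiled, ready for the next page...", passed_checks=False)

page = build_page("A bedtime story about courage.", {"name": "Mila"}, age=5)
print(page.text)
```

The specifics don’t matter; the point is that generating the words is the smallest part of the system.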
What we’ve learned is that more automation doesn’t always mean better experiences. Sometimes the right move is to slow down and put humans back in the loop, so the technology can better serve the moment. Generative models are powerful, but they’re not mind readers. In Luna’s case, they unlocked an experience that simply wasn’t possible even a year ago, yet without proper emotive guidance they could miss the emotional nuance of a bedtime story, or strip away the warmth a parent feels in a shared creative experience. Our goal was to use GenAI as a silent creative enabler: applied tactically, with carefully considered (and painstakingly enforced) guardrails. The payoff of that heavy foundational effort is a product that genuinely enhances the storytelling process without losing the human magic at its heart. We’re continuously blown away by the stories and illustrations Luna produces, and it’s all the more rewarding because we remember the work it takes to get consistency and believability out of a technology that works by being unpredictable.
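“Humans in the loop” doesn’t have to stay abstract, either. A hypothetical sketch (illustrative thresholds and names, not our production logic): anything the guardrails aren’t confident about gets routed to a person instead of shipping automatically.

```python
from enum import Enum

class Decision(Enum):
    PUBLISH = "publish"
    HUMAN_REVIEW = "human_review"
    REJECT = "reject"

# Illustrative thresholds; tuning these is part of the invisible work described above.
SAFETY_REJECT_BELOW = 0.5
AUTO_PUBLISH_ABOVE = 0.9

def route_output(safety_score: float, tone_matches_brief: bool) -> Decision:
    """Decide whether a generated page ships automatically, gets a human look, or is discarded."""
    if safety_score < SAFETY_REJECT_BELOW:
        return Decision.REJECT
    if safety_score >= AUTO_PUBLISH_ABOVE and tone_matches_brief:
        return Decision.PUBLISH
    # Anything ambiguous goes to a person: slower, but that's the point.
    return Decision.HUMAN_REVIEW

print(route_output(safety_score=0.75, tone_matches_brief=True))  # Decision.HUMAN_REVIEW
```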
So no, this piece isn’t a case against generative AI; quite the opposite. Luna and many considered applications like it are clear proof of its potential. But if you’re new to the technology, don’t dismiss the danger of treating AI like a shortcut or a checklist item. That’s when you end up with apps that generate one hundred variations of something no one asked for, or flood users with content they don’t value.
Startups need a different mindset. Think less “automation first” and more “intention first”.
Ask:
- Is this solving a real need, or just showing off the tech?
- Is this helping the user feel more empowered, more creative, more connected?
- What happens if we do less, but do it with care?
Restraint is not a weakness. In AI product design, it’s a strategy. Knowing what not to automate (and why) is often more important than knowing what you can.
We’ve spent two years navigating these tensions. And if there’s one takeaway, it’s this: you can’t outsource purpose to the model. You still have to make the hard decisions about experience, ethics, and identity. The best AI products aren’t just functional. They’re thoughtful.
Founders have a choice. We can build faster. Or we can build better.
The next generation of AI products won’t just impress. They’ll build trust, spark human creativity, and stand for something.