Photo by Joshua Tsu on Unsplash
A few weeks ago, we made it past the first screening stage of a programme I’d been keeping an eye on for a while. Cue the cheers, the Discord messages, the joy of being in the running. You know the feeling.
Then, about as quickly as the excitement arrived, it was gone. The acceptance had been made in error: their AI-assisted screening had misfired, flagging us as a fit when we weren’t. The human team caught it and reached out to let us know.
We took it on the chin. It happens. And honestly, we found it more amusing than anything else, given that we’re building an AI product ourselves. These things happen, and we appreciated the transparency.
But here’s where it gets genuinely interesting. When I asked for the reasoning behind the withdrawal, I got two clear answers. First: geography. The programme is rooted in the Mediterranean and focused on southern European startups, and we didn’t fit that scope. Fair enough, no argument there. Second: product defensibility. Essentially, in a world where AI is evolving this fast, how do you protect an AI-based platform?
That second reason stuck with me. Not because it stung, but because it’s a question worth answering properly, and openly.
Everyone Is Asking This Right Now
Let’s step back from our specific situation for a moment, because this isn’t just about Edventures. This is a question that every AI startup is facing right now, whether from investors, accelerators, partners, or even potential customers.
And the concern is legit. Frontier models are improving at a pace that would have seemed implausible just two or three years ago. GPT-5, Gemini, Claude, Mistral — these tools are becoming more capable and more accessible by the month. So when someone sees a new AI product, the natural first reaction is: “Why can’t I just do this with ChatGPT, Gemini, or Claude?”
It’s a fair question. And if your honest answer doesn’t go deeper than “well, we basically wrap the API and add a nice interface,” then the scepticism is warranted. But if your answer is more considered than that, if you’ve built something with layers of reasoning behind it, then the question becomes an opportunity, not a threat.
Every AI founder right now needs to do this groundwork. Not just to satisfy investors, but to genuinely understand what they’re building and why it matters. The founders who can answer this clearly will build more focused products, make better roadmap decisions, and ultimately create things that are harder to replicate. The ones who can’t answer it probably have something worth worrying about. It’s not an easy question to answer, but it’s a necessary one.
So here’s how we think about it at Edventures.
The Framing Mistake Most People Make
The most common version of the defensibility question assumes that the AI model is the product. Under that assumption, yes — you’re in trouble. Because if your entire value proposition is “we prompt GPT cleverly,” then anyone with an API key and an afternoon can catch up. Prompt engineering is a skill, not a moat. It stopped being a differentiator years ago.
But that framing misses something important: in the best AI products, the model is an ingredient, not the meal. The defensibility lives in the layers around and beneath it — the domain knowledge baked into the product, the workflows that took years of real-world research to design, the proprietary data that makes the model sharper over time, and the distribution infrastructure that gets you close to users no one else is talking to.
That’s the distinction worth exploring. And it applies well beyond Edventures; it’s the lens through which any serious AI product should be evaluated.
Our Layer One: The Application Is the First Moat
At Edventures, our first layer of defensibility has nothing to do with which model powers Anna. It’s the application layer itself: the purpose-built coaching workflows, structured next-step guidance, accountability check-ins, progress roadmapping, and persistent memory that together form how Anna actually coaches and supports a founder. And underpinning all of it is the domain expertise that shaped every design decision.
Generic AI tools answer questions. They’re reactive. Anna guides execution. She’s proactive. That’s a structural difference.
I’ve spent 12 years involved in various entrepreneurship ecosystems, and over the past 7 years coached over 100 first-time founders across Europe, India, and Southeast Asia. One pattern emerged with striking consistency: founders don’t fail because they lack information. They fail because they lack clarity on what to do next, objective feedback on whether they’re on the right track, and accountability to keep them moving forward. They’re not ignorant — they’re unsure and need more confirmation than a more seasoned entrepreneur does. And a reactive chatbot, however capable, won’t follow up with you on Monday to ask whether you spoke to your first potential customer. Anna does.
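To make the reactive-versus-proactive distinction concrete, here’s a deliberately minimal sketch of the difference in mechanics. This is purely illustrative, not Edventures’ actual implementation: all function names and the weekly cadence are hypothetical. A reactive chatbot has no state between conversations; a proactive coach tracks when it last heard from a founder and decides when to reach out.

```python
from datetime import datetime, timedelta

# Hypothetical example: a reactive bot only responds when prompted,
# so nothing like the two functions below exists in its design.

def next_checkin(last_interaction: datetime, cadence_days: int = 7) -> datetime:
    """When a proactive coach should next follow up with this founder."""
    return last_interaction + timedelta(days=cadence_days)

def checkin_due(last_interaction: datetime, now: datetime,
                cadence_days: int = 7) -> bool:
    """True once the founder has gone quiet past the cadence window,
    i.e. it's time to ask whether they spoke to that first customer."""
    return now >= next_checkin(last_interaction, cadence_days)
```

A scheduler polling `checkin_due` is trivial engineering; the point is that this loop exists at all, and that what the coach *says* at the check-in is shaped by the domain-specific workflow and memory around it.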
The application layer is our first line of defence, regardless of which model sits beneath it. Swap the LLM entirely, and our proprietary workflow, memory, and execution logic remain, because they were designed around a decade of lived coaching experience, not around any particular technology. That’s a structural advantage that took years of field research, pilot testing, and iteration to build, and it isn’t something a competitor can spin up overnight.
You can copy a feature. You can’t copy the decade of coaching insight embedded in how a product thinks.
Our Layer Two: We Are Training Our Own Model
The second layer is where our long-term moat starts compounding, and it’s already in motion.
A few months ago we secured EU high-performance computing (EuroHPC) resources through RISE, the Swedish research institute, and we are actively testing and training a domain-specific LLM for entrepreneurship coaching, with the base-model candidates currently narrowed to Gemma and Mistral. Once this model is trained on real founder conversations, on the specific decisions, blockers, and breakthroughs of the early founder journey, it will outperform general-purpose AI on our use case in ways that are genuinely hard to replicate from outside.
Even more critically, we own the model and its weights. That means full control over deployment, costs, performance, and IP. As more founders use Edventures, more proprietary founder data trains the model. The better the model gets, the better Anna coaches. That’s a compounding moat: it accelerates with every interaction, in a way that a general-purpose model trained on everything will never achieve in our specific domain. It’s a long-term advantage we’ve already started moving towards.
Having your own model matters commercially too. Owning the model means we’re not dependent on any third-party roadmap, pricing decision, or API change. That independence is its own form of defensibility, and it also allows us to iterate faster and more effectively than we could on top of a general-purpose model. It also lets us be more flexible on pricing, because we have tight control over our costs.
Our Layer Three: The Data No One Else Has
The third layer is the longest-term one, and in many ways the quietest, but it’s the one that makes everything else sharper over time.
As more founders use Edventures, Anna accumulates something that no general-purpose AI is designed to build: a deep, contextualised understanding of how real founders actually think, get stuck, and move forward. Not hypothetically — but through real conversations, real decisions, and real moments of uncertainty at every stage of the journey. Over time, this shapes a data layer that reflects the genuine texture of early-stage entrepreneurship in a way that no dataset scraped from the open web ever could.
This feeds back into both the application layer and the model. It helps us understand where founders consistently lose momentum, which types of guidance actually drive action, and how coaching needs to adapt across different markets, cultures, and starting points. It’s also what makes localisation meaningful rather than superficial — not just translating language, but understanding that the advice relevant to a founder in Lagos is genuinely different from what’s useful to one in Stockholm or Bangalore.
The longer this runs, the more refined it becomes. And that kind of depth takes time to build. That’s exactly why we’re focused on getting close to real founders now, rather than later.
The Playbook, Laid Out Plainly
This is how we think about defensibility, and honestly, we think it’s a useful lens for any AI startup wrestling with the same question:
- Build around a specific problem, not a general capability. The more precisely your product is designed around a real, recurring user need, the harder it is to replicate with a general-purpose model. Depth of focus is a competitive advantage in itself.
- Design the application layer to do what AI alone cannot. Workflows, memory, accountability structures, and interaction logic that are purpose-built for your domain will hold their value regardless of which model sits beneath them. The model is an ingredient; the product is everything around it.
- Develop a model that gets sharper with use. General-purpose models are built for breadth. If your use case has enough nuance, training or fine-tuning on domain-specific data will outperform a generic model in ways that matter to your users and are difficult to close from the outside.
- Get close to the users others overlook. Serving a specific, underserved audience builds the kind of contextual understanding and community trust that broad platforms are rarely incentivised to develop. Proximity to your users generates data, insight, and loyalty that no compute budget can buy.
The Broader Point
We said at the top that this question applies to every AI startup, and we mean it. The founders who answer it well tend to have three things in common. They have deep, hard-won domain expertise. They’re building proprietary data loops that improve with use. And they’ve found a user group that the incumbents are too broad or too busy to serve well.
The “why isn’t this just ChatGPT?” question, framed correctly, is one of the most clarifying questions an early-stage AI founder can sit with. It forces you to articulate what you’re actually building beneath the surface. If your answer is crisp and layered, it gives you conviction. If it isn’t, it gives you a roadmap.
Either way, you come out ahead for having asked it.
We’d love to know your thoughts on this. If you’re an AI founder wrestling with the same question, or an investor who thinks about defensibility differently, reach out to me here — these are exactly the kinds of conversations I find most useful.