Build-vs-buy for clinical AI: a framework
A decision framework for health tech CTOs evaluating whether to build internal AI capabilities or integrate third-party solutions. Spoiler: it depends on your data moat.
Every health tech CTO I talk to is facing the same question: should we build AI capabilities in-house or buy them from a vendor? The answer isn’t as simple as “build if it’s core, buy if it’s commodity.” In clinical AI, the lines are blurrier.
Here’s the framework I use with clients.
The three questions
Before diving into build-vs-buy, answer these honestly:
1. Is the AI a feature or the product?
If AI is one feature among many in your platform (e.g., auto-generating progress notes in a care management system), you have more flexibility. You can start with APIs, validate demand, and invest in building only when the economics justify it.
If AI is the product (e.g., a clinical decision support tool), you probably need to build. Your differentiation lives in the model, the training data, and the evaluation methodology. Outsourcing that is outsourcing your competitive advantage.
2. Do you have a data moat?
A data moat means you have proprietary data that makes your AI better than what a generic vendor can offer. In healthcare, this usually means:
- Thousands of labeled clinical interactions specific to your use case
- Domain-specific feedback loops (clinicians correcting AI output)
- Patient population data that skews differently from public training sets
If you have a data moat, building makes more sense — you can train or fine-tune models that a vendor can’t replicate. If you don’t, you’re just rebuilding what the vendor already has, slower and more expensively.
3. What’s your compliance posture?
Some organizations can use cloud AI APIs under a BAA. Others need everything on-premise. Your compliance requirements constrain your options before you even start evaluating vendors.
The decision matrix
| Scenario | Recommendation |
|---|---|
| AI is a feature + no data moat + cloud OK | Buy. Use vendor APIs. Invest your engineering time in the product, not the model. |
| AI is a feature + data moat + cloud OK | Hybrid. Start with vendor APIs, build evaluation harnesses, plan a migration path to self-hosted when volume justifies it. |
| AI is the product + data moat | Build. This is your competitive advantage. Invest in ML engineering, evaluation infrastructure, and a data flywheel. |
| Any scenario + strict on-premise requirement | Build or self-host. Vendor options are limited. Budget for infrastructure engineering. |
Common mistakes
Mistake 1: Building too early
I’ve seen teams spend 6 months building a custom NLP pipeline when the OpenAI API would have gotten them to market in 6 weeks. The custom solution was 15% better on benchmarks and 6 months late.
Ship first. Optimize later. The feedback from real users is worth more than marginal accuracy improvements.
Mistake 2: Buying for too long
The flip side: teams that stay on vendor APIs past the point where the economics make sense. At scale, API costs compound. A team paying $15K/month for API calls could self-host for $6K/month — but only if they have the engineering capacity to manage it.
Run the cost model quarterly. The crossover point sneaks up on you.
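The cost model doesn't need to be fancy. Here's a minimal sketch in Python, using the hypothetical $15K/$6K figures above and an assumed per-1K-call API price; plug in your own vendor pricing and a realistic estimate of self-hosting cost (infrastructure plus the engineering time to run it):

```python
def monthly_api_cost(calls: int, price_per_1k: float) -> float:
    """Monthly vendor API spend at a given call volume."""
    return calls / 1000 * price_per_1k

def crossover_calls(price_per_1k: float, selfhost_monthly: float) -> float:
    """Monthly call volume above which self-hosting becomes cheaper,
    assuming self-host cost is roughly fixed (hardware + ops, not per-call)."""
    return selfhost_monthly / price_per_1k * 1000

# Illustrative numbers only: $10 per 1K calls, $6K/month to self-host.
breakeven = crossover_calls(10.0, 6000.0)   # 600,000 calls/month
```

Above 600K calls/month in this example, the vendor bill exceeds the self-hosting budget. Re-run it quarterly with your actual volume and updated pricing.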
Mistake 3: Ignoring the evaluation problem
Whether you build or buy, you need to measure quality. Most teams don’t invest in evaluation infrastructure early enough, and end up making build-vs-buy decisions based on vibes instead of data.
Before you build anything, build the test: a set of 100-200 labeled examples with expected outputs. Run every candidate (vendor or homegrown) against it. Make decisions with numbers.
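The harness can start as a few dozen lines. A minimal sketch, with a made-up `Example` type and exact-match scoring (a real clinical suite would use task-specific scoring, not string equality):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Example:
    """One labeled case: input prompt and the expected output."""
    prompt: str
    expected: str

def evaluate(candidate: Callable[[str], str], examples: list[Example]) -> float:
    """Fraction of examples the candidate gets right.
    Run this against every option, vendor or homegrown, before deciding."""
    hits = sum(1 for ex in examples if candidate(ex.prompt) == ex.expected)
    return hits / len(examples)
```

Wrap each vendor API and each homegrown model as a `Callable[[str], str]`, run them all against the same example set, and compare scores instead of vibes.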
My recommendation for most health tech teams
Start with buy. The vendor ecosystem in 2026 is mature enough that you can ship meaningful AI features in weeks, not months. Use that speed to validate product-market fit and build your data moat through real user interactions.
Plan for build. Design your architecture with a migration path. Abstract your AI calls behind an internal service boundary so swapping from vendor API to self-hosted model doesn’t require rewriting your application.
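One way to draw that boundary, sketched in Python with `typing.Protocol` (names and stub implementations are illustrative, not a specific vendor's SDK):

```python
from typing import Protocol

class CompletionService(Protocol):
    """The internal boundary: application code depends only on this."""
    def complete(self, prompt: str) -> str: ...

class VendorCompletion:
    """Wraps a vendor API. Stubbed here; a real version calls the SDK."""
    def complete(self, prompt: str) -> str:
        return f"[vendor] {prompt}"

class SelfHostedCompletion:
    """Same interface over a self-hosted model server. Also stubbed."""
    def complete(self, prompt: str) -> str:
        return f"[self-hosted] {prompt}"

def draft_progress_note(svc: CompletionService, transcript: str) -> str:
    # Application code never imports a vendor SDK directly, so swapping
    # backends is a one-line change at the composition root.
    return svc.complete(f"Draft a progress note from: {transcript}")
```

When the cost model says it's time to migrate, you swap `VendorCompletion` for `SelfHostedCompletion` where the service is constructed, and nothing downstream changes.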
Invest in evaluation from day one. This is the one thing you should always build, regardless of where the model runs. Your evaluation suite is how you know when it’s time to switch.