Reconciling Conflicting Data: A Thinking Pattern
One of the most useful thinking patterns I’ve developed working with AI is what I call “both/and” synthesis. It’s simple to describe, surprisingly hard to practice, and it’s produced better strategic recommendations than any single data source could.
The pattern
When two data sources contradict each other, the instinct is to pick the one you trust more. The survey says X; the landscape analysis says Y. You weigh the methodologies, check the sample sizes, and declare a winner.
Instead, I ask: “What would it mean if BOTH data sources are valid? Could they be measuring different dimensions?”
The framework:
- What is each source actually measuring?
- Could these be different stages or dimensions of the same problem?
- Is there a frame where both are true simultaneously?
- What would a “both/and” approach look like?
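One way to make the framework concrete is to hand-label each source with the dimension it actually measures, then check for overlap. This is only an illustrative sketch; the record fields and labels are my assumptions, not part of any real tool:

```python
from dataclasses import dataclass

@dataclass
class Source:
    name: str
    finding: str
    dimension: str  # hand-labeled: what this source actually measures

def reconcile(a: Source, b: Source) -> str:
    """Apply the both/and check: the same dimension means a real
    contradiction to resolve; different dimensions suggest both
    findings can be true at once."""
    if a.dimension == b.dimension:
        return f"Genuine conflict on '{a.dimension}': weigh the methodologies."
    return (f"Both/and candidate: {a.name} measures {a.dimension}, "
            f"{b.name} measures {b.dimension}.")

forums = Source("community forums", "want same-industry peers", "emotional recognition")
program = Source("cohort outcomes", "cross-industry performs better", "cognitive diversity")
print(reconcile(forums, program))
```

The point of the sketch is the branch: only when two sources share a dimension do you fall back to weighing methodologies and declaring a winner.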
Real example
We were building an AI entrepreneurship platform. Community data from online forums said users wanted same-industry peers in their cohort. People in food service wanted to learn from other food service entrepreneurs. The logic was obvious: shared context means better advice.
But program data from 14+ cohorts told a different story. Cross-industry cohorts consistently outperformed same-industry ones. Participants who couldn’t copy each other’s playbooks asked better questions and brought more creative solutions.
Then a user interview added a third data point. The participant described wanting people “doing the same thing,” but when pressed, what she actually meant was people at the same stage, not the same industry. She wanted peers who understood the feeling of figuring out pricing for the first time, not peers who also sold baked goods.
The resolution
Same-industry desire was about emotional recognition (“someone who gets it”). Cross-industry value was about cognitive diversity (“someone who sees it differently”). Both were true. They were measuring different dimensions of the same need.
The design implication: match by stage and challenge type, not by industry. Cohort members should recognize each other’s situations without being able to copy each other’s answers.
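A minimal sketch of that matching rule, grouping by stage and challenge type and then reporting industry diversity within each cohort. The participant records, field names, and grouping key are hypothetical, not the platform's actual schema:

```python
from collections import defaultdict

# Hypothetical participant records: (name, stage, challenge, industry)
participants = [
    ("Ana",   "early",  "pricing", "food service"),
    ("Ben",   "early",  "pricing", "software"),
    ("Chloe", "early",  "pricing", "retail"),
    ("Dev",   "growth", "hiring",  "food service"),
    ("Elle",  "growth", "hiring",  "consulting"),
]

def build_cohorts(people):
    """Match by stage + challenge type (shared situation), then report
    industry diversity so organizers can avoid copyable playbooks."""
    cohorts = defaultdict(list)
    for name, stage, challenge, industry in people:
        cohorts[(stage, challenge)].append((name, industry))
    report = {}
    for key, members in cohorts.items():
        industries = {ind for _, ind in members}
        report[key] = {
            "members": [n for n, _ in members],
            # 1.0 means every member is from a different industry
            "industry_diversity": len(industries) / len(members),
        }
    return report

for key, info in build_cohorts(participants).items():
    print(key, info)
```

The grouping key encodes "recognize each other's situations" (same stage, same challenge) and the diversity score flags cohorts where members could copy each other's answers.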
That insight shaped our cohort design and changed how we talked about the product. None of it would have surfaced if we’d just picked a winner between the two data sources.
When it doesn’t work
This pattern is useful when data sources are measuring different things. It’s less useful when:
- One source has clear methodological problems
- Both sources measure the same thing and genuinely contradict each other
- You’re using it to avoid making a hard call
Knowing when a pattern applies (and when it doesn’t) is more valuable than the pattern itself.