Real Data vs. Fake Data
I was recently in a meeting with a team in the process of setting its objectives and key results for the next quarter. When one of the projects—a proposed “deep dive”—came up for discussion, the team leader asked, “Should we do a deep dive or an in-market test?”
My ears perked up.
The team’s first goal was to understand the proposed project, but that question raised a much more important one: Do we have real data?
In this case, a “deep dive” would entail analysts doing research and crunching numbers to come up with a business recommendation. In other contexts, such a research project might include steps like:
We brought the team together to brainstorm and prioritize the best ideas.
We did a survey of potential customers and asked what they wanted.
We created a clickable mockup of the product/service experience to see how much people liked it.
We did a conjoint analysis to estimate how much they would be willing to pay for each feature.
All of that activity can look and sound rigorous, and it can fool people into thinking it’s more grounded than it is. Those approaches are fine, but the data they generate is less reliable than what you would get from literally offering the product, or several versions of it, and seeing how people react.
The risk comes when you forget just how far from real the data from those research activities is. It’s easy to accidentally build strategies on fake data (anecdotes, insiders’ perspectives, and seemingly well-reasoned arguments that are ultimately just made up) and then to have unearned confidence in those strategies.
That’s why the question “Should we do a deep dive or an in-market test?” is so useful. It forced the team I was working with to reckon with whether they already had enough legitimate data to make judgments, or whether they needed to learn more first.