Election Lessons: Statistics and Goal Setting
On the morning of election day, I emailed a friend:
"...[P]eople seem to be relying on these statistical tropes despite the massive early turnout, the pandemic effect, and Rs trying to [exclude certain votes], which, to me, make all the numbers suspect. We don't know whether the sampled population represents the overall population BECAUSE NO ONE HAS A CLUE WHAT THE ELECTORATE WILL ACTUALLY LOOK LIKE. That's like the first minute of the first day of Stats 101! Sorry, had to get that off my chest. :)"
The reason I needed to get that off my chest is that I’d witnessed an unending series of aggravated assaults on statistics while watching the news.
It seemed like a version of this conversation played out every five minutes:
Reporter: “We’ve never seen lines this long (or short) at this polling location. And as you know, the early vote is unprecedented. No one knows what will happen!”
Anchor: “I agree. ...Now let’s look at some polls that attempt to predict the future based on the past.”
Statistics nerd-dom aside, it made me wonder how reporters and pundits could acknowledge the unprecedented nature of the election and then, in the same breath, rely on precedent to opine about it.
My guess is that it comes down to something simple: No one likes to say, “I don’t know.”
It’s surely not great TV. “What’s going on, John?” “I have no earthly idea, Wolf. Back to you.”
But it’s not only on TV. No one likes to tell their boss that they’re just making things up as they go. Heck, it’s hard to admit to ourselves when we’re wandering in the dark. Uncertainty avoidance is human nature.
We thrive on plans, in part, because it feels good to have a story about the future that’s more certain than it really is. The Plan is a warm blanket.
But it’s worth remembering that a plan is ultimately just a guess. And that’s especially true in dynamic and unprecedented situations.
We might also take the election results as a lesson in how benchmarks, once set, can function as if they are written on tablets and handed down from God.
Once everyone agrees that the goal is to grow revenue by 15%, for example, it’s easy to judge everything that happens relative to that target—“The team is struggling because it’s only achieving 10% growth”—without revisiting whether the target itself is valid.
Chris Hayes mentioned a related point about polling on the Ezra Klein podcast last week:
“...[I]f the measurement is wrong, the measurement failed. Which is to say, Sara Gideon did not have a ‘disappointing’ night last night in Maine. The polling in Maine had a terrible night. It is pretty clear that Sara Gideon was not going to win that race. ...I just think the polling was wrong, and it was wrong all along.”
The point is that good strategy requires continually pushing ourselves to scrutinize what we believe, because doing so doesn’t come naturally. Practically, when reviewing a strategy, that might look like explicitly asking:
Based on the evidence we’re seeing, what are the three most important assumptions that may not be right?
What are three things that are true today that were not the case the last time we planned?
The key there is not asking general questions (e.g., “Has anything changed?”), but pushing oneself or the team to find what’s changed (e.g., “What are three things…?”).