Whenever we attribute meaning to the results of a data project, we are interpreting those results. We’re using what we know about the data, the analysis, the project as a whole, and all kinds of preexisting knowledge, opinions, and worldviews to say, “Ah, if that is the result of this analysis, then that means…”. 


This isn’t unique to data science; it is the foundation of all science, from deciphering the results of a supercollider to humans figuring out that striking certain rocks together makes sparks. Have a question > gather data > process data > interpret results. 


Sometimes our interpretations are correct, but sometimes they aren’t. Sometimes we don’t have enough data, or we have flawed data, or our methodology can’t give us the answer we want, and sometimes our biases, preconceived notions, and prejudices keep us from getting to the correct interpretation. 


The possibility of a flawed interpretation is causing a lot of problems for data science today. If an interpretation can be flawed, it can’t be automatically trusted. If your data results are ‘open to interpretation’, what good are they? The desire to have your data project taken seriously, be believed, and not be dismissed leads data producers to bury the very notion of interpretation. ‘This is a fact!’ proclaim the scientists. ‘That’s your opinion!’ yells the distrustful public, who are fed up with being given contradictory interpretations in the guise of inarguable truth. Who is right?


Well, there’s a middle ground between objective truth and subjective opinion that relates to the constructed nature of knowledge. Rather than proclaiming your interpretation as fact, be transparent that it is an interpretation and support it. Refusing to explain yourself and refusing to admit that the interpretation step of the Data Equity Framework actually happens can stem from a variety of reasons. The most sympathetic is fear: being afraid that if you admit any uncertainty, it will be all too easy to dismiss the information you’ve worked so hard to bring to the table. The worst is arrogance: believing that your interpretation is infallible and that no one else deserves to know how you arrived at it. The most common, we think, is nervousness: many people feel like they don’t know how to defend their interpretations without weakening their report. 


We All Count knows that interpretation’s apparent weakness – that it isn’t self-supporting – is actually its greatest strength. Through the act of supporting your interpretation to your audience, you can share your worldview, bring people around to your point in a meaningful way, and earn trust in this and future data products. Not to mention prevent the further erosion of public trust in data science. How?

The Four Pillars of Interpretation Support

Pillar One: Motivation

In order for people to trust your interpretation, they are going to want to know why you are asking the question in the first place and what factors may have led you to this particular interpretation. Whether or not your motivations are appropriately reflected in the interpretation is not for you to decide, but rather for your audience. If you are a company that stands to directly gain if the analysis results are interpreted this way, it doesn’t automatically mean the interpretation is wrong, but it may be grounds for a close look at how you acknowledged this motivation and what safeguards you used to mitigate its influence. Your motivations are not weaknesses if you are upfront about them. 


Use a tool like a Motivation Touchstone to disclose the primary motivations, restrictions, and rewards you faced when interpreting the data results. People are looking for this information (or assuming the worst) anyway, and by disclosing it you’ve already cleared the first hurdle: having your results dismissed because of suspected bias. You want to say, “We recognize these potential sources of influence, and here’s what we did about them.” And if you think you don’t have any, you do. And if you didn’t deal with them, they almost certainly affected your interpretation, and people are right to challenge its validity.

Pillar Two: Method

This is the most traditional pillar of supporting evidence for your conclusions. It’s usually pretty easy for data scientists to get on board with this one; after all, it can be summarized in the familiar adage: show your work. 


To support your interpretation, you need to answer some basic questions:


  • What do you want to know?


Getting specific about the questions you are trying to answer allows your audience to see if the scope and subject of your final interpretation match the design of your initial questions. 


  • What data did you put into the project? (This is where a Data Biography makes things a breeze.)


Where, how, and from whom you got your data is going to be super relevant in supporting your interpretation. Also, what data you collected demonstrates your worldview: the collection of assumptions, knowledge, and perspective that informed where you looked for factors relevant to your question. If you are studying illness but your worldview doesn’t include Germ Theory, you’re going to be testing a lot of bloodletting and leeches and maybe not looking in the right place. If you are studying poverty but aren’t collecting the same detailed data on the rich that you are on the poor, your worldview has been baked right into what you are able to interpret. 


  • What method did you use?


Now this one is tricky. Obviously, this is the core of ‘show your work’. You need to answer questions like: Can this methodology even answer your specific question (looking at you, often-inappropriate Randomized Controlled Trials!)? How does the methodology answer your question? What parts of the methodology are open to debate (all of them have strengths and weaknesses)? And so on. 


The tricky part here is how much data literacy we expect our audiences to have. It’s not equitable to require a stats degree to understand the basic tenets of your process, but it’s also not okay to leave out this information for those who can and want to understand and even critique it. Data literacy will only increase among our audiences if we both offer the information (giving them the incentive and opportunity to learn and apply data literacy skills) and meet them halfway (taking the time to defend our methodology in a clear, simple, reduced-barrier way). 


  • How certain are you?

Okay, so here’s the big one: talking about uncertainty. If you’re not willing to admit uncertainty, why should anyone listen to you? Talk about point estimates, confidence intervals, and statistical significance with confidence and vulnerability. Being vulnerable about what you know, what you don’t, and the probabilities of both is the essence of statistical science. Uncertainty is only a weakness if you let it be one.
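
As an illustrative sketch (hypothetical numbers, not from any real project), here is what reporting a point estimate alongside its uncertainty can look like, using only Python’s standard library:

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

# Hypothetical results: change in outcome scores for 20 participants.
scores = [4.1, 2.8, 5.0, 3.3, 4.7, 1.9, 3.8, 4.4, 2.5, 3.9,
          4.8, 3.1, 2.2, 5.3, 3.6, 4.0, 2.9, 3.4, 4.5, 3.7]

point_estimate = mean(scores)
standard_error = stdev(scores) / sqrt(len(scores))

# 95% confidence interval via a normal approximation
# (a t-distribution would give a slightly wider interval for n = 20).
z = NormalDist().inv_cdf(0.975)  # roughly 1.96
low = point_estimate - z * standard_error
high = point_estimate + z * standard_error

# Report the uncertainty alongside the estimate, not instead of it.
print(f"Average improvement: {point_estimate:.2f} points "
      f"(95% CI: {low:.2f} to {high:.2f})")
```

The exact phrasing matters less than the habit: the interval travels with the estimate everywhere the estimate is reported.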

Pillar Three: Process

Being open about the interpretation process is the fastest way to get people invested in your outcomes. First of all, do you know how you arrived at this interpretation? If this is a black-box algorithm or some machine learning process that gives you results while you just cross your fingers and guess at their meaning, then even you shouldn’t trust your interpretation. If, however, you are like the majority of projects, you went through a specific interpretation phase. 


When you got the results of your analysis, how did you decide what they meant? If you decided in advance that a certain result would mean this or another result would mean that, you interpreted in advance. Make this a conscious step in your process and talk about it. Be transparent about other valid interpretations you considered, and break down invalid interpretations of these results and explain why you think they are invalid.


Talk about what assumed facts and perspectives are informing your interpretation beyond what you could measure with your method. If I want to find out how well a fertilizer works on plant growth and I control for light exposure, that choice reflects a worldview where light is related to plant growth. This worldview may be based on well-supported interpretations of previous experiments, but it is also relevant to supporting my current interpretation. 
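
To make that concrete, here is a minimal sketch (entirely made-up numbers, plain standard-library Python) of estimating a fertilizer effect while controlling for light exposure with ordinary least squares:

```python
# Hypothetical sketch: controlling for light exposure when estimating a
# fertilizer effect. All numbers are invented for illustration.
# Model: growth = b0 + b1 * fertilizer + b2 * light

fertilizer = [0, 0, 0, 0, 1, 1, 1, 1]             # fertilizer applied? (0/1)
light      = [4, 6, 8, 10, 4, 6, 8, 10]           # hours of light per day
growth     = [2.1, 3.0, 3.8, 4.9, 3.2, 4.1, 5.0, 5.8]  # growth in cm

# Build the normal equations X'X b = X'y for X = [1, fertilizer, light].
rows = [[1.0, f, l] for f, l in zip(fertilizer, light)]
k = 3
xtx = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
xty = [sum(r[i] * y for r, y in zip(rows, growth)) for i in range(k)]

# Solve with Gauss-Jordan elimination (fine for a 3x3 system).
for i in range(k):
    pivot = xtx[i][i]
    xtx[i] = [v / pivot for v in xtx[i]]
    xty[i] /= pivot
    for j in range(k):
        if j != i:
            factor = xtx[j][i]
            xtx[j] = [a - factor * b for a, b in zip(xtx[j], xtx[i])]
            xty[j] -= factor * xty[i]

intercept, fertilizer_effect, light_effect = xty
print(f"fertilizer effect (holding light constant): {fertilizer_effect:.2f} cm")
```

Because light appears in the model, the fertilizer coefficient is the estimated effect at a fixed amount of light; the ‘light matters’ worldview is baked right into the estimate.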


You doubly increase the equity of your project by being transparent about the interpretation: first, because you give people an opportunity to see how you are thinking, and second, because it reveals instances where your perspective influenced the interpretation. Remember, your worldview affecting the interpretation is inescapable and is only a problem if it is hidden. People won’t trust your results unless they can see how you bridged the transition from numbers to meaning.

Pillar Four: Participation

If you really want someone to accept your interpretation, arrive there together. Participatory interpretation is the antidote to many equity issues in interpretation. When you have the results of your analysis, don’t interpret them alone! Gather the stakeholders in your project, lead them through the steps that got you here, present them with the results, and apply meaning together. This is genuinely difficult for a variety of reasons, but getting to say, ‘This is happening in your community; why do you think that is?’ will not only give you an equity payoff but open you up to interpretations you couldn’t even have imagined. 

If you can’t get your stakeholders involved in the initial interpreting, at least make sure there is a meaningful mechanism for feedback. Invite alternate interpretations as well as criticisms of yours. It will give you either the opportunity to defend your interpretation and win someone over or, better yet, the chance to improve your interpretation with the benefit of their perspective. Not being receptive to feedback stems from either fear or arrogance, and it is the complete opposite of good science. 

How we talk about an interpretation matters to its reception. If you believe in education, constructed knowledge, and human progress, you’ll know that it’s not enough to be ‘right’. For people to accept a fact, it needs to fit into their brains somewhere. Supported interpretations are flexible and robust. They have more than one pillar of support so that they can survive shifting understanding and new information discoveries. They come with their own clear chain of construction (what we’ve been doing so far in this article) that gives them a memorable spot to live in. This spot in your brain is perfectly shaped by the surrounding methods, facts, and perspectives that hold it snugly in place. Without the pillars, your ‘fact’ lies crumpled in a pile of other unsupported and unsupportable conclusions and soon, you can’t tell the difference between any of them.