Why OpenStack should stop tracking its Net Promoter Score

TL;DR: because it makes no sense for OpenStack, and it's a distraction at a time when the Foundation should focus on addressing more specific and actionable concerns tied to its mission "to inform the roadmap based on user's opinions and deployment choices."

I’d like the OpenStack community to focus on the fundamental issues of OpenStack adoption, ease and speed of development and ease of use by application developers. There is plenty of evidence that such areas need attention and there are already metrics tracking them: adding Net Promoter Score (NPS) is a distraction.

This is part of the reason why I’m running for Individual Director of the OpenStack Board. Read the rest of my candidacy pitch and hit me with any questions, comments or concerns you have over email, twitter, IRC (@reed), phone call, smoke signals … whatever works for you!

So, there are many strong reasons for removing the Net Promoter Score from the survey altogether. The main objection: NPS is meant to track customer loyalty to a brand. A typical corporation can identify its customers and its brand with precision, and can therefore measure that loyalty effectively.

On the other hand, the OpenStack Foundation has many types of customers, the concept of its products is not exactly defined and ultimately OpenStack can't be considered a brand in the sense of the original HBR article that launched NPS. The User Survey is answered by everyone from cloud operators or sys-admins to OpenStack upstream developers, app developers / deployers and even a mystery blob called "Other" in a multiple choice answer. The question, "How likely are you to recommend OpenStack to a friend or colleague?" can be interpreted in too many ways to make any sense. Who exactly is the [Promoter|Detractor], who are their friends and colleagues, what exactly is "OpenStack" to them? Did they get some pieces (which ones?) of OpenStack from a vendor? Did they get the tarballs? Do they have the skills to use whatever they purchased or downloaded for free?
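For context, the score itself is trivial to compute: respondents answering 9 or 10 count as promoters, 0 through 6 as detractors, and NPS is simply the percentage of promoters minus the percentage of detractors. Here's a minimal sketch with made-up numbers (nothing from the actual survey):

```python
# Minimal NPS sketch: scores are answers to the 0-10
# "How likely are you to recommend OpenStack..." question.
# Hypothetical responses, not actual User Survey data.
responses = [10, 9, 9, 8, 7, 6, 10, 3, 9, 5]

promoters = sum(1 for s in responses if s >= 9)    # 9-10
detractors = sum(1 for s in responses if s <= 6)   # 0-6
# Passives (7-8) count toward the total but not the score.
nps = 100 * (promoters - detractors) / len(responses)

print(f"NPS = {nps:.0f}")  # 20 for this sample
```

Note that the calculation throws away everything about who the respondent is and what "OpenStack" means to them, which is exactly the ambiguity described above.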

I’d argue that the NPS collected in the Survey in its current form has no value whatsoever.

[Dilbert comic by Scott Adams: "The OpenStack NPS went from #$ to *&"]

I asked the User Committee to answer these questions: Is the User Committee convinced that an open source project like OpenStack gets any value from tracking this score? Why exactly is that number tracked in the survey? What exactly does the UC want to get from it, and what actionable results does it expect?

Stay tuned: I will update this blog post as the conversation continues.

Frank Day added to the list:

NPS is an industry standard measure.

He missed my point, unfortunately. I didn't start this conversation to debate whether NPS is an industry standard measure. My question is whether it makes sense to track it for OpenStack when: (a) OpenStack is not an industry but an open source collaboration; (b) it has no exact definition of its product (what is OpenStack in the context of the user survey? Only the core? The whole big tent? The tarballs from upstream? A distribution from $VENDOR?); and (c) it lacks a definition of who its customers are (the survey has only a vague idea).

If we are to drop NPS, it should only be in exchange for some other measure of satisfaction.

He fails to explain why that should be important for a huge open source collaborative project like OpenStack. It's also not clear whose satisfaction we would be measuring, or what they would be doing.

Lauren Sell and Heidi Joy Tretheway gave a more thoughtful answer, which I suggest reading in full. Some excerpts:

When we analyzed the latest user survey data, we looked at a demographic variable (user role, e.g. app developer), a firmographic variable (e.g. company size), and deployment stage. We learned that overall, there was no significant difference in NPS scores for people who identified with a specific function, such as app developers, or for companies by size. As a result, we didn’t do further data cuts on demographic/firmographic variables. We did learn that people with deployments in production tended to rate OpenStack more highly (NPS of 43 for production, vs 24 for dev/qa and 20 for POC).

One cause for variance is that unfortunately we’re not comparing apples to apples with the trending data in the latest survey report.

Going forward, I think we should focus on deployments as our trend line.

As a next step, the independent analyst plans to draw up correlations (particularly for low scores) associated with particular technology decisions (e.g. projects or tools) and attitudinal data from the "your thoughts" section (e.g. we might find that firms that value X highly tend to rate OpenStack lowest).
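To make concrete what those per-segment cuts look like, here's a minimal sketch of grouping the 0-10 answers by deployment stage and computing an NPS for each group (the data is hypothetical, not taken from the survey):

```python
from collections import defaultdict

def nps(scores):
    """Percent promoters (9-10) minus percent detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical (deployment stage, score) pairs -- not survey data.
responses = [
    ("production", 10), ("production", 9), ("production", 6),
    ("dev/qa", 8), ("dev/qa", 4), ("dev/qa", 9),
    ("poc", 7), ("poc", 3), ("poc", 10),
]

by_stage = defaultdict(list)
for stage, score in responses:
    by_stage[stage].append(score)

for stage, scores in sorted(by_stage.items()):
    print(f"{stage}: NPS = {nps(scores):+.0f}")
```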

I replied on the list saying that ultimately all this slicing and dicing is not going to tell us more than what we already know anecdotally and from other data (that the community used to collect), and suggesting that resources would be better allocated towards 1:1 interviews with survey respondents and other actions towards the community.

Roland Chan sent an update to the list after the meeting: the committee decided to keep analyzing this score. Heidi Joy will work with the data scientist, which means resources that could be better spent elsewhere are being used to serve a corporate number.