Why OpenStack should stop tracking its Net Promoter Score

TL;DR: because it makes no sense for OpenStack, and it creates too much distraction at a time when the Foundation should focus on more specific and actionable concerns tied to its mission “to inform the roadmap based on users’ opinions and deployment choices.”

I’d like the OpenStack community to focus on the fundamental issues of OpenStack adoption, ease and speed of development and ease of use by application developers. There is plenty of evidence that such areas need attention and there are already metrics tracking them: adding Net Promoter Score (NPS) is a distraction.

This is part of the reason why I’m running for Individual Director of the OpenStack Board. Read the rest of my candidacy pitch and hit me with any questions, comments or concerns you have over email, twitter, IRC (@reed), phone call, smoke signals … whatever works for you!

So, there are many strong reasons for removing the Net Promoter Score from the survey altogether. The main objection: NPS is used to track customer loyalty to a brand. Typical corporations can identify their customers and their brand with precision, and can therefore measure such loyalty effectively.
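For context on what the score actually measures: NPS is derived from a single 0–10 question, counting answers of 9–10 as promoters and 0–6 as detractors. A minimal sketch of the arithmetic:

```python
def nps(scores):
    """Net Promoter Score: percentage of promoters (answers of 9-10)
    minus percentage of detractors (0-6), from responses to
    "How likely are you to recommend X to a friend or colleague?"."""
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# 5 promoters, 3 passives (7-8), 2 detractors out of 10 responses:
print(nps([10, 10, 9, 9, 9, 8, 7, 7, 5, 3]))  # 30
```

Note that passives (7–8) dilute the score but are otherwise ignored, which is part of why the number is so blunt an instrument.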

The OpenStack Foundation, on the other hand, has many types of customers, the concept of its products is not exactly defined, and ultimately OpenStack can’t be considered a brand in the sense of the original HBR article that launched NPS. The User Survey is answered by everyone from cloud operators and sysadmins to OpenStack upstream developers, app developers/deployers, and even a mystery blob called “Other” in a multiple-choice answer. The question “How likely are you to recommend OpenStack to a friend or colleague?” can be interpreted in too many ways to make any sense. Who exactly is the promoter or detractor, who are their friends and colleagues, and what exactly is “OpenStack” to them? Did they get some pieces (which ones?) of OpenStack from a vendor? Did they get the tarballs? Are they skilled enough to use whatever they purchased or downloaded for free?

I’d argue that the NPS collected in the Survey in its current form has no value whatsoever.

[Dilbert comic by Scott Adams: “The OpenStack NPS went from #$ to *&”]

I asked the User Committee to answer these questions: Is the User Committee convinced that an open source project like OpenStack gets any value from tracking this score? Why exactly is that number tracked in the survey, what exactly does the UC want to get from that number, what actionable results are expected from it?

Stay tuned: I will update this blog post as the conversation continues.

Frank Day added to the list:

NPS is an industry standard measure.

He missed my point, unfortunately. I didn’t start this conversation to debate whether NPS is an industry standard measure. My question is whether it makes sense to track it for OpenStack when: (a) OpenStack is not an industry but an open source collaboration; (b) it has no exact definition of its product (what is OpenStack in the context of the user survey? Only the core? The whole big tent? The tarball from upstream? A distribution from $VENDOR?); and (c) it lacks a definition of who its customers are (the survey has only a vague idea).

If we are to drop NPS, it should only be in exchange for some other measure of satisfaction.

He also fails to explain why such a measure should be important for a huge open source collaborative project like OpenStack. And whose satisfaction, doing what, is not clear either.

Lauren Sell and Heidi Joy Tretheway gave a more thoughtful answer that I suggest to read in full. Some excerpts:

When we analyzed the latest user survey data, we looked at a demographic variable (user role, e.g. app developer), a firmographic variable (e.g. company size), and deployment stage. We learned that overall, there was no significant difference in NPS scores for people who identified with a specific function, such as app developers, or for companies by size. As a result, we didn’t do further data cuts on demographic/firmographic variables. We did learn that people with deployments in production tended to rate OpenStack more highly (NPS of 43 for production, vs 24 for dev/qa and 20 for POC).

One cause for variance is that unfortunately we’re not comparing apples to apples with the trending data in the latest survey report.

Going forward, I think we should focus on deployments as our trend line.

As a next step, the independent analyst plans to draw up correlations (particularly for low scores) associated with particular technology decisions (e.g. projects or tools) and attitudinal data from the “your thoughts” section (e.g. we might find that firms that value X highly tend to rate OpenStack lowest).

I replied on the list saying that ultimately all this slicing and dicing is not going to tell us more than what we already know anecdotally and from other data (that the community used to collect), and suggested that resources would be better allocated to 1:1 interviews with survey respondents and other actions towards the community.

Roland Chan sent an update to the list after the meeting: the committee decided to keep analyzing this score. Heidi Joy will work with the data scientist, which means resources that could be better spent elsewhere are being used to serve a corporate number.

The hard life of OpenStack application developers

I’m starting to feel the pain of developers attempting to write applications on top of OpenStack clouds: it hurts, and I haven’t come anywhere close to doing anything special.

I started following the getting started tutorial for the First App Application For OpenStack (faafo) and I got stuck on step 1, authentication. There are way too many things that are convoluted, confusing and sometimes carefully hidden (or ridiculously displayed) in Horizon, and they make it too hard to get started developing apps on OpenStack. I started following the Get Started document for python-libcloud:

You need the following information that you can obtain from your cloud provider:

  • auth URL
  • user name
  • password
  • project ID or name (projects are also known as tenants)
  • cloud region

Looks simple, but it’s not. The libcloud documentation says the auth URL doesn’t include the path, but in practice every cloud I tried behaves differently. DreamHost and Runabove require not only /v2.0 but also /tokens after the base URL (https://keystone.dream.io/v2.0/tokens), while CityCloud and HPCloud seem to work as documented. Some clouds will throw weird errors if you add /tokens. Libcloud is also confusing regarding support for Keystone API v3 (which CityCloud uses): the official docs don’t mention v3 (bug), but a blog post from a libcloud maintainer provides examples of how to use it.
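Since every provider seems to want a slightly different auth URL, the trial and error can at least be systematized. Here is a minimal, purely illustrative Python sketch (the helper name and the ordering of variants are my own) that generates the permutations worth trying for a given Keystone endpoint:

```python
def candidate_auth_urls(base_url):
    """Given a Keystone endpoint, return the auth URL variants worth
    trying in order: the libcloud docs say no path, but some clouds
    want /v2.0 and others (e.g. DreamHost, Runabove) insist on the
    full /v2.0/tokens path. Illustrative only."""
    base = base_url.rstrip('/')
    # Strip any version/path suffix so we can rebuild cleanly.
    if base.endswith('/tokens'):
        base = base[:-len('/tokens')]
    if base.endswith('/v2.0'):
        base = base[:-len('/v2.0')]
    return [base,                    # bare URL, as the docs describe
            base + '/v2.0',          # some clouds want the version
            base + '/v2.0/tokens']   # DreamHost/Runabove style

print(candidate_auth_urls('https://keystone.dream.io/v2.0/tokens'))
```

You would try each variant in turn until the Method Not Allowed errors stop; a change to InvalidCredsError is, perversely, a sign of progress.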

Finding the right project ID or name is also challenging, because some clouds don’t show the project ID in the web control panel (and not every cloud I tested runs OpenStack Horizon). Chances are you’ll have to find the RC file, which is fairly easy if the cloud you’re targeting uses Horizon.
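The RC file that Horizon lets you download is just a shell script exporting OS_* environment variables. A small, hypothetical parser like this one (it only handles plain `export VAR=value` lines, not password prompts or quoting tricks) can pull out the values libcloud needs:

```python
import re

def parse_openrc(text):
    """Extract OS_* variables from an openrc-style shell script
    (the RC file downloadable from Horizon). Illustrative sketch:
    handles plain `export VAR=value` lines only."""
    env = {}
    for line in text.splitlines():
        m = re.match(r'\s*export\s+(OS_\w+)=["\']?([^"\']*)["\']?\s*$', line)
        if m:
            env[m.group(1)] = m.group(2)
    return env

# Example input; the values below are made up.
rc = '''
export OS_AUTH_URL=https://keystone.example.com/v2.0
export OS_TENANT_NAME="demo"
export OS_USERNAME=alice
'''
print(parse_openrc(rc)['OS_TENANT_NAME'])  # demo
```

OS_TENANT_NAME (or OS_TENANT_ID) is the “project ID or name” the libcloud tutorial asks for.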

Even after I found all the pieces, I didn’t manage to authenticate with libcloud on CityCloud, Rackspace (using the openstack provider) or HP. Only with OVH did I manage to authenticate and get a list of images with my stock Ubuntu 15.04. On DreamHost I managed to authenticate only on Ubuntu 14.04, in a clean, updated virtualenv.

It took a lot of trial and error to get rid of the Method Not Allowed errors and get to libcloud.common.types.InvalidCredsError: ‘Invalid credentials with the provider’, which at least hints that I guessed the auth_url correctly for CityCloud and HP. But I have no idea why the credentials are not accepted, since the username and password are the ones I use on the Horizon panel on HP and on the custom CityCloud panel. My guess is that I haven’t guessed the project_name or tenant_id (or whatever it’s called) correctly. This is too confusing.

OVH seems to work, but it requires the full URL:

auth_url = ‘https://auth.runabove.io/v2.0/tokens’

It won’t work without /v2.0/tokens. So, after spending one day reading docs and trying to guess permutations, I started thinking that libcloud is probably not a feasible approach.

I’ve started looking at shade, OpenStack Infra’s homegrown interoperability library, instead: authentication went smoothly immediately with DreamHost and HP on my machine. At least that part was easy, and I finally managed to make some progress in my tests with it. Hopefully by the end of the day I’ll have the FAAFO app running.

The end result is that it shouldn’t be this hard: even for a very modest developer like me, following a well written, copy-and-paste tutorial should not be this confusing. A lot of work needs to be done to make OpenStack clouds friendly to app developers.

So long OpenStack community, see you soon

Let’s call it a “ciao” and not an “addio.”

After almost four years (un)managing the OpenStack community, I have decided to move on and join DreamHost’s team to lead the marketing efforts of the DreamCloud. The past three years and 10 months have been amazing: I’ve joined a community of about 300 upstream contributors and saw it grow under my watch to over 3,600. I knew OpenStack would become huge and influence IT as much as the Linux kernel did and I still think it’s true. I’m really proud to have done my part to make OpenStack as big as it is now.

During these years, I’ve focused on making open source community management more like a system, with proper measurement in place and onboarding programs for new contributors. I believe that open source collaboration is just a different form of engineering and as such it should be measured correctly in order to be better managed. I am particularly proud of the Activity Board, one of the most sophisticated systems to keep track of open collaboration. When I started fiddling with data from the developers’ community, there were only rudimentary examples of open source metrics published by Eclipse Foundation and Symbian Foundation. With the help of researchers from University of Madrid, we built a comprehensive dashboard for weekly tracking of raw numbers and quarterly reports with sophisticated, in-depth analysis. The OpenStack Activity Board may not look as pretty as Stackalytics, but the partnership with its developers makes it possible to tap into the best practices of software engineering metrics. I was lucky enough to find good partners in this journey, and to provide an example that other open source communities have followed, from Wikimedia to Eclipse, Apache and others.

OpenStack Upstream Training is another example of a great partnership: I was looking into a training program to teach developers about open source collaboration when, in Hong Kong, I spoke to an old friend, Loic Dachary. He told me about his experiment and I was immediately sold on the idea. After a trial run in Atlanta, we scaled the training up for Paris and Vancouver, and community members have already repeated it twice in Japan. I’m sure OpenStack Upstream Training will also be offered in Tokyo.

It’s not a secret that I can’t stand online forums and that I consider mailing lists a necessary evil. I set up Ask OpenStack hoping to give users a place to find answers. It’s working well, with a lot of traffic in the English version and much less in Chinese. My original roadmap was to add more languages, but we hit some issues with the software powering it (Askbot) that I hope the infra team and the excellent Marton Kiss can solve quickly.

On the issue of diversity, both gender and geographic, I’m quite satisfied with the results. I admit that these are hard problems that no single community can solve but each can put a drop in the bucket. I believe the Travel Support Program and constant participation in Outreachy are two such drops that help OpenStack be a welcoming place for people from all over the world and regardless of gender. The Board has also recently formalized a Diversity working group.

Of course I wish I had done some things better, and faster. I’m sorry I didn’t make the CLA easier for casual and independent contributors: I’m glad to see the Board finally taking steps to improve the situation. I also wish I had delivered the OpenStack Groups portal earlier and with more features, but the dependency on OpenStackID and other projects with higher priorities delayed it a lot. Hopefully that portal will catch up.

I will miss the people at the OpenStack Foundation: I’ve rarely worked with such a selection of smart, hardworking people who are also fun to be around. It’s a huge privilege to work with people you actually want to go out with, to talk about life, fun, travel, beers and wines, and not work.

When we Italians say “ciao,” it means we’re not saying goodbye for long.

So long, OpenStack community, see you around the corner.

A week with Dell XPS13 9343 (2015 model)

Last week I received the new Dell XPS 13 from the Sputnik program, the one with Ubuntu pre-installed. I wanted to vote with my wallet, since I believe Ubuntu is a pretty solid desktop environment, on par with Mac OS X and the various versions of Windows.

Design-wise, Dell has produced a very good looking machine, nothing to complain about there. Kudos to Dell’s team for designing something much prettier than a MacBook Air. The nearly borderless screen is fantastic and great to look at.

The only complaint I have for Dell is the set of options they picked for the Sputnik program: the only way not to get a touchscreen is to buy a severely limited machine with only a 128GB disk. No way. All the other options force you to spend money and sacrifice battery life on a useless feature.

I think touchscreens are uncomfortable to use on desktops, and I said so a long time ago. Unless the desktop OS is radically redesigned for touch and hand gestures on the monitor, it makes no sense. I would never have bought the touchscreen if Dell had offered a 256GB option with the regular monitor.

On the Ubuntu side there are quite a few glitches, like the cursor becoming sticky in some applications even with touch-to-click disabled on the touchpad, and some difficulty adapting to the ultra-dense display. By the way, that’s another reason not to get the touchscreen: a lower resolution is good enough on such a small laptop anyway. Installing Ubuntu Vivid was also a bit more painful than I expected.

All in all, I didn’t return the laptop as I thought I would, mainly because I needed to upgrade quickly to a machine with 8GB of RAM.

Post-summit summary of OpenStack Paris

Seven straight days of full-time work, from 9am on Saturday and Sunday for the second edition of OpenStack Upstream Training through the feedback session at 6pm on Friday evening, are finally over, and I’m starting to catch up.

I spoke on Monday, highlighting some findings of my research on how to change organizations so they can better contribute to OpenStack, and later led, with Rob, Sean and Allison, the creation of the Product working group (more on this later). The double-length session dedicated to addressing the growth pains in Neutron removed the “if” and left only the question of “how and when do we empower drivers and plugin developers to take their fate into their own hands”.

I think this has been one of the most productive summits: I’m leaving Paris with the feeling that we keep improving our governance models and processes to better serve our ecosystem. Despite the criticisms, this community keeps changing and adapting to new opportunities and threats like nothing I’ve seen before. I’m all charged up and ready for the six months of intense work leading up to Vancouver!

Next steps for ‘Hidden Influencers’

With Paris only weeks away it’s time to announce that we have a time and place to meet people whose job is to decide what OpenStack means for their company. The OpenStack Foundation has offered a room to meet in Paris on Monday, November 3rd, in the afternoon: please add the meeting to your schedule.

There have been discussions in the past weeks about these hidden influencers; there was a recent call by Randy Bias to identify the people, functions or groups that can own the product OpenStack. The time is ripe to get to know who directly controls the engineers’ priorities and to complete the circle of end users, operators, product owners and developers.

Rob, Sean, Allison and I have the idea of creating a working group modelled after the Operators group: get people in the same room some time mid-cycle and sync up on the development effort. It’s just an idea, but mid-cycle is when the development roadmap gets more realistic: it’s clearer which blueprints are in good shape and which are less likely to make it into the release. We believe it would be valuable to have a moment when product decision makers can coordinate and get a chance to share their priorities.

The room is available for 4 hours, but of course we can use it for less time if needed. It makes sense to start hacking on some of the topics in the coming weeks, before Paris. Please join the mailing list and introduce yourself; the agenda will be drafted on the etherpad. For naming the group you can join the async brainstorming session (drop names at random and don’t judge yourself or others: everything is fair game).


Improving the content of OpenStack wiki

The pages on wiki.openstack.org have been growing at a fast pace and it’s time to give the wiki more shape: new contributors, end users and operators are having a hard time finding documentation, since over time it has spread across many places. The wiki can have a role in directing readers to the most appropriate place. Luckily we have a team ready to help give the ~350 pages more solid navigation, fixing content while at it:

  • Katherine Cranford (a trained taxonomist) volunteered to get through the wiki pages and propose a taxonomy for the wiki.
  • Shari Mahrdt, a recent hire by the Foundation, has volunteered a few hours per week to implement the taxonomy in the wiki pages, set up templates and write documentation to maintain the wiki.
  • I am overseeing the implementation and looking more carefully at content for contributors.

We are keeping track of things to do on the etherpad: Action_Items_OpenStack_Wiki. Shari and I started implementing Katherine’s proposed taxonomy: it’s visible as a navigable tree on https://wiki.openstack.org/wiki/Category:Home.

As an example of how the taxonomy works, let’s look at the tree of OpenStack Programs. One can think of Programs as teams of people using tools (code repository, bug tracker, etc.) and coordinated processes to deliver one or more projects that achieve a clearly stated objective. For example, the Telemetry Program has a team of core reviewers responsible for driving development in the code repositories of the Ceilometer project and the Ceilometer client, and each has pages for blueprints and specs, meeting notes and more. Programs contain projects, so the tree of categories under Programs will look like:
  • Programs
    • Telemetry
      • Ceilometer
        • Client
        • Blueprints
    • Block Storage
      • Cinder
        • Client
        • Blueprints
    • Compute
      • Nova
        • Client
        • Blueprints

You can see this live on the wiki at https://wiki.openstack.org/wiki/Category:Programs. Fairly straightforward, except that over the years some pages started out describing a project and have since been repurposed to illustrate a Program. Look at the Nova page, for example: the name of the page is “Nova” but the title is OpenStack Compute. We’ll definitely have to shuffle content around. For example, the Category:Programs page can be considered a duplicate of the https://wiki.openstack.org/wiki/Programs page: since everything on MediaWiki is a page, Category pages can be edited and redirected to/from like all other pages. In this case, it would make sense to make high-level content like Programs more of a dynamic page, like Category:Programs. The cool thing about this approach is that we can probably create new category pages for new programs automatically when modifications to programs.yaml are approved via Jenkins.
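The automation idea at the end could look something like this sketch: given a program-to-projects mapping (a stand-in for whatever programs.yaml actually contains; the function name and data shape are my own), emit the Category pages to create, each with its parent category, mirroring the taxonomy tree above:

```python
def category_pages(programs):
    """Given a {program: [projects]} mapping (a stand-in for the data
    in programs.yaml), return (page title, parent category) pairs for
    the wiki Category pages to create, mirroring the Programs tree."""
    pages = []
    for program, projects in sorted(programs.items()):
        # Each program becomes a category under Category:Programs...
        pages.append(('Category:' + program, 'Category:Programs'))
        # ...and each of its projects becomes a subcategory.
        for project in projects:
            pages.append(('Category:' + project, 'Category:' + program))
    return pages

pages = category_pages({'Telemetry': ['Ceilometer'],
                        'Compute': ['Nova']})
for title, parent in pages:
    print(title, '->', parent)
```

A Jenkins job triggered on approved changes to programs.yaml could diff this list against existing Category pages and create the missing ones.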

Adding a taxonomy and templates (more on this later) will help newcomers discover relevant content and find information more easily. While we implement the changes to the wiki we’ll also review the content of the pages, delete or mark as obsolete the old stuff, and make things more readable for all. You can keep up with the progress by looking at RecentChanges.

If you’d like to help out or find out more, please feel free to contact stefano@openstack.org and shari@openstack.org.

How is your OpenStack team organized?

I’ve been collecting a lot of good insights talking to directors and managers about how their companies are organized to contribute to OpenStack. For geographic reasons I have mostly gathered comments from people between San Francisco and Silicon Valley and I’d like to expand the research.

I’m especially interested in learning about internal processes, system of incentives, things that impact performance evaluation for engineers contributing to OpenStack.

To expand the research, I’m asking the OpenStack community to fill in this brief survey or contact me directly (stefano@openstack.org) for a quick interview.