Why you should not use Slack for volunteer communities

Last night I was asked, again, to join a Slack channel during a community event, and I lost it. I lost my patience with this constant push into a walled garden, something I can accept only at work. I don’t want my email address to be given away to a company so it can brag about its growth rate… and what do I get in exchange? More work: sign up, pay attention to the terms of service, unsubscribe, turn off notifications…

No! No! and NO! Community managers, don’t use Slack and please note:

  • It’s tacky to ask volunteers to surrender their email address to a third party who will use it to “occasionally” send unrequested “news and announcements”. No, thank you.
  • It’s annoying to force your volunteers to sign up for yet another service. Click click click click, email, click-verify, tutorial, etc. No, thank you.
  • It’s wrong to archive your volunteers’ conversations and credentials in one big fat place where the next criminal will grab them. Because you know it will eventually happen, right? No, thank you.

Slack works so well in a work environment because it keeps history, it’s very good on mobile, its notifications can be fine-tuned… it’s pervasive and very effective… at work! But the last thing I want as a volunteer is to spend time fine-tuning notifications for each and every group I join.

Also, you can’t expect volunteers to keep up with the history of a channel (hey, hello, hi, wazzup, thank you, great, awesome, gif, gif, gif…), so that Slack feature is not useful. As a community manager, you should know that there is always someone who abuses the @here @channel @all shortcuts to ask moronic “support” questions in the most populated #general channel. There you have your daily “@all it doesn’t work!”, even though there is a channel called #support.

Buzz off, and RTFM! I said it!

There are better ways: non-intrusive, easy to start, and easy to quit when the meeting is done. Etherpads have chat built in: do the volunteer work, take notes, share links in the chat. If Etherpad is too complicated, I’d accept Google Docs.

Do you just need a temporary channel to chat? Just create one on the fly with freenode web chat, Mibbit or any other web-based IRC client. Hit it and quit it: chances are the archives of your meeting are not going to be read by anybody anyway. Let your community focus on asynchronous systems: email works well, as do forums, your website, etc.

You should not give away your community members: they’re not yours to give in the first place!

DreamHost Gives Free SSL/TLS Certificates with Let’s Encrypt

DreamHost has done the right thing, deciding to let go of a portion of its revenues in favor of free SSL/TLS certificates for everybody.

It’s a great pleasure to work for such a company, knowing every day that customers come first and revenues are a side effect. DreamHost may not be one of the industry’s multi-billion-dollar juggernauts, but the people here are helpful, nice, competent and believe in what they do.

Let’s Encrypt is another contribution to a free society, joining the contributions to Ceph, Astara, OpenStack, WordPress and many other free software/open source projects directly and indirectly sponsored by DreamHost.

Source: Free SSL/TLS Certificates at DreamHost with Let’s Encrypt

PS: we’re hiring a Cloud Storage engineer.

Meet OpenStack at FOSDEM 2016

The most important gathering of free software and open source enthusiasts in Europe is coming on Jan 30–31 in Brussels, and OpenStack will have a booth there, plus many talks.

I look forward to going there to get the pulse of the open source community regarding the abolition of the Safe Harbor provision and how that impacts users of US-based public clouds, like DreamHost. It’s a complicated issue that I’ve just started looking into. One of the talks I’ll make sure to attend is Rosario Di Somma’s talk about Magnum in production: he’s been evaluating Magnum for DreamHost and he’ll share the first results of his investigation.

On the OpenStack side of things, Adrien Cunin is the main point of contact, and he’s looking for volunteers to staff the official OpenStack table at FOSDEM. If you have holes in your agenda, please consider adding your name to the etherpad.

Developing apps on OpenStack is still too complicated

Yesterday’s meeting with the OpenStack App Developer Working Group proved that new developers approaching OpenStack enter a system designed to make them fail.

The community seems distracted, uncovering new problems while the old, known problems go unaddressed. I’m running for a seat on the OpenStack Foundation Board: if you care about the developer experience, consider voting for me (search your inbox for “OpenStack Foundation – 2016 Individual Director Election”; you can also change your vote).

The App Developer Working Group drafted a detailed analysis of the app developer experience on the major cloud providers AWS, Azure and Google Compute, and compared that experience with Rackspace Cloud. It’s a great piece of work.

The first thing you notice in the report? The team at Intel who ran the analysis chose Rackspace as an ‘OpenStack reference cloud’. That choice is debatable, but during the meeting, when we discussed alternatives, it became clear that there is no good choice! There is no vanilla OpenStack implementation when it comes to application developers; they’re all snowflakes (as Randy Bias put it)… All of the public clouds in the OpenStack Marketplace have made choices that affect app developers one way or another. If we want to assess the whole development experience on OpenStack, we need a different framework.

As an open source community we can’t compare AWS, Azure and Google Compute to either all OpenStack public clouds or only one. Powering up TryStack to be an app developers’ playground wouldn’t really work either.

This is a much larger conversation: we need to discuss more on the User Committee mailing list, hold fewer in-person meetings, and share intentions online with others before turning them into actions and wasting time.

I’m concerned by the lack of focus within the community: when operating OpenStack became a visible issue, the whole community focused on helping operators out. We need to do the same for app developers.

What is an OpenStack Powered Compute?

The OpenStack Technical Committee voted on a resolution suggesting that the OpenStack Board modify the definition of “OpenStack Powered Compute” to include statements such as:

An “OpenStack Powered Compute” Cloud MUST be able to boot a Linux Guest

This is quite a change in the OpenStack DefCore effort since, as Rob says:

The fundamental premise of DefCore is that we can use the development tests for API and behavior validation

DefCore has always been about the OpenStack API and carefully avoided checking the implementation of clouds, leaving enough space for vendors to differentiate their products without harming consumers. An ironic twist of fate is now forcing the whole program to take a stand on implementation, too.

Stating that an OpenStack cloud MUST be able to boot a Linux guest is the most controversial part, and I see why the TC is going in this direction: as an OpenStack user, I expect to be able to upload my images to any OpenStack cloud. Given that Linux is the OS of choice for the vast majority of today’s cloud workloads, booting my Linux image is a must-have. It’s a practical choice and one that makes a lot of sense.

The problem is that the TC forcing implementation details is a slippery slope. Will the TC also suggest that any OpenStack cloud must offer a routable IP, or boot a VM in less than 5 seconds, or that flavor names be the same across all OpenStack clouds? Granted, all these requests make sense from the perspective of at least some users: there are already lots of unnecessary complications in putting workloads on OpenStack right now.

The question is whether those mandates make sense for OpenStack as a whole, and I’m not convinced they do. I tend to lean towards two complementary positions:

  1. Make sure that OpenStack Powered Compute clouds are transparent and their behavior is discoverable. Just as the OpenStack Foundation exposes the results of the DefCore API compatibility tests on the Marketplace, I think DefCore should test the implementation of the clouds, public clouds especially, and expose the results (see the sketch after this list). This way a user would know that a cloud passes DefCore tests, runs OpenStack upstream code (as done now), allows uploading/booting Linux guests, offers IPv6 by default, boots in x seconds, has XYZ flavors, allows custom flavors, etc.
  2. DefCore should consider public clouds, hosted private clouds and distributions as different beasts. To me it makes more sense to expect a public cloud to allow uploading Linux guests than to mandate the same for a private cloud… and even less sense for a distribution. The buying process for these is different and the needs of users are different, too.
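
To make the first point concrete, here is a rough sketch of what such an implementation probe could look like, using the shade library. This is purely hypothetical and not part of DefCore; the cloud entry, image file and flavor names are all placeholders:

    # Hypothetical capability probe, NOT part of DefCore: checks whether a
    # cloud lets a user upload and boot a Linux guest, and how long the
    # boot takes. All names below are placeholders.
    import time

    import shade

    cloud = shade.openstack_cloud(cloud='candidate-cloud')  # from clouds.yaml

    def boot_linux_guest_seconds():
        # Upload our own Linux image (a small CirrOS image works well).
        image = cloud.create_image('defcore-probe', filename='cirros.img',
                                   wait=True)
        start = time.time()
        server = cloud.create_server('defcore-probe-vm', image=image.id,
                                     flavor=cloud.get_flavor('m1.small'),
                                     wait=True)
        elapsed = time.time() - start
        # Clean up after ourselves.
        cloud.delete_server(server.id, wait=True)
        cloud.delete_image(image.id)
        return elapsed

    print('Booted a Linux guest in %.1f seconds' % boot_linux_guest_seconds())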


I’m a candidate for the OpenStack Foundation Board of Directors: if you like what you read, remember to vote for me in the 2016 elections.

Balancing IRC and email, the secret of success for open collaboration

Are seasoned contributors to OpenStack giving too much importance to IRC, and is that creating bad side effects in other areas of the community?

Since I started contributing more to working groups outside of the OpenStack upstream contributor community, I’ve noticed that email is used in a limited way, with a couple of lists having either very low traffic or a low signal-to-noise ratio. The email lists are often used to share calendar notifications for meetings and their agendas, with little to no discussion. Have we insisted so much on holding weekly meetings “like developers do” that now people think email discussions are less important?

This weekend a couple of posts on Planet OpenStack talk about IRC. One, from Chris Aedo, is a very good source of information on how to use IRC better and suggests spending more time on IRC. He argues:

If you are interested in becoming an OpenStack contributor, having a persistent IRC presence might be one of the most important secrets to success. Anyone who spends time in one of the more active project channels will immediately see the value of synchronous communication here where problems are sometimes solved in minutes.

At the other end of the spectrum is Chris Dent, who highlights the chaotic nature of IRC. He mentions that too much IRC leads to:

no opportunity to be truly away, frequent interruptions, an inflated sense of urgency, and a powerful sense of “oh noes, I’m missing what’s happening on IRC!!!!”—but the killer is the way in which it allows, encourages and even enforces the creation and expression of project knowledge within IRC, leaving out people who want or need to participate but are not synchronous with the discussion.

This last point is crucial: not everybody can be online all the time; some of us have to sleep or go see a movie at times. IRC is often mentioned as a blocker for new community members. I advocate using a bouncer only to get personal messages, not to read the backlogs.

While synchronous communication is crucial for distributed collaboration, a sane habit of using email is just as important. The OpenStack community has put so much emphasis on using IRC to hold weekly meetings and to resolve the most controversial conversations in real time that some newcomers now think synchronous communication is more important than async email.

A sane balance of sync and async communication is crucial to the success of open source collaboration, mixing email and IRC (which is what most OpenStack upstream developers do, by the way).

All in all, I think there is also a need to teach people how to hold conversations via email, with proper quoting, no top-posting and other nice things to do… but also client-side email sorting and filtering, and what not to say (hint: “me too” messages are generally considered noise)… a lost art, as much as using IRC well.

Why OpenStack should stop tracking its Net Promoter Score

TL;DR: because it makes no sense for OpenStack, and it provides too much distraction at a time when the Foundation should focus on addressing more specific and actionable concerns tied to its mission “to inform the roadmap based on users’ opinions and deployment choices”.

I’d like the OpenStack community to focus on the fundamental issues of OpenStack adoption, ease and speed of development and ease of use by application developers. There is plenty of evidence that such areas need attention and there are already metrics tracking them: adding Net Promoter Score (NPS) is a distraction.

This is part of the reason why I’m running for Individual Director of the OpenStack Board. Read the rest of my candidacy pitch and hit me with any questions, comments or concerns you have over email, twitter, IRC (@reed), phone call, smoke signals … whatever works for you!

So, there are many strong reasons for removing the Net Promoter Score from the survey altogether. The main objection: NPS is used to track customer loyalty to a brand. A typical corporation can identify its customers and its brand with precision, and can therefore effectively measure such loyalty.

On the other hand, the OpenStack Foundation has many types of customers, the concept of its products is not exactly defined, and ultimately OpenStack can’t be considered a brand in the sense of the original HBR article that launched NPS. The User Survey is answered by everyone from cloud operators and sysadmins to OpenStack upstream developers, app developers/deployers and even a mystery blob called “Other” in a multiple-choice answer. The question, “How likely are you to recommend OpenStack to a friend or colleague?”, can be interpreted in too many ways to make any sense. Who exactly is the [Promoter|Detractor], who are their friends and colleagues, and what exactly is “OpenStack” to them? Did they get some pieces (which ones?) of OpenStack from a vendor? Did they get the tarballs? Are they skilled enough to use whatever they purchased or downloaded for free?
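
For reference, the arithmetic behind the score is trivial; here is a minimal sketch in Python (the 9–10 promoter and 0–6 detractor thresholds are the standard NPS definition). The point is not that the number is hard to compute, but that here its inputs mean nothing:

    def net_promoter_score(ratings):
        # ratings: answers to "how likely are you to recommend..." on a
        # 0-10 scale. Promoters rate 9-10, detractors 0-6; passives (7-8)
        # only count in the denominator.
        promoters = sum(1 for r in ratings if r >= 9)
        detractors = sum(1 for r in ratings if r <= 6)
        return 100.0 * (promoters - detractors) / len(ratings)

    # Example: 3 promoters, 2 detractors, 3 passives out of 8 responses
    print(net_promoter_score([10, 9, 9, 8, 7, 7, 6, 5]))  # 12.5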

I’d argue that the NPS collected in the Survey in its current form has no value whatsoever.

[Comic: Dilbert by Scott Adams — “The OpenStack NPS went from #$ to *&”]

I asked the User Committee to answer these questions: Is the User Committee convinced that an open source project like OpenStack gets any value from tracking this score? Why exactly is that number tracked in the survey, what exactly does the UC want to get from that number, what actionable results are expected from it?

Stay tuned: I will update this blog post as the conversation continues.

Frank Day added to the list:

NPS is an industry standard measure.

He missed my point, unfortunately. I didn’t start this conversation to debate whether NPS is an industry-standard measure. My question is whether it makes sense to track it for OpenStack when: (a) OpenStack is not an industry but an open source collaboration; (b) it has no exact definition of its product (what is OpenStack in the context of the user survey? Only the core? The whole big tent? The tarball from upstream? A distribution from $VENDOR?); and (c) it lacks a definition of who its customers are (the survey has only a vague idea).

If we are to drop NPS, it should only be in exchange for some other measure of satisfaction.

He fails to explain why that should be important for a huge open source collaborative project like OpenStack. It’s also unclear whose satisfaction, doing what, we would be measuring.

Lauren Sell and Heidi Joy Tretheway gave a more thoughtful answer that I suggest to read in full. Some excerpts:

When we analyzed the latest user survey data, we looked at a demographic variable (user role, e.g. app developer), a firmographic variable (e.g. company size), and deployment stage. We learned that overall, there was no significant difference in NPS scores for people who identified with a specific function, such as app developers, or for companies by size. As a result, we didn’t do further data cuts on demographic/firmographic variables. We did learn that people with deployments in production tended to rate OpenStack more highly (NPS of 43 for production, vs 24 for dev/qa and 20 for POC).

One cause for variance is that unfortunately we’re not comparing apples to apples with the trending data in the latest survey report.

Going forward, I think we should focus on deployments as our trend line.

As a next step, the independent analyst plans to draw up correlations (particularly for low scores) associated with particular technology decisions (e.g. projects or tools) and attitudinal data from the “your thoughts” section (e.g. we might find that firms that value X highly tend to rate OpenStack lowest).

I replied on the list saying that ultimately all this slicing and dicing is not going to tell us more than what we already know anecdotally and by looking at other data (that the community used to collect), and suggesting that resources would be better allocated towards 1-1 interviews with survey respondents and other actions towards the community.

Roland Chan sent an update to the list after the meeting: the committee decided to keep analyzing this score. Heidi Joy will work with the data scientist, which means resources that could be better spent are being used to serve a corporate number.

How I evaluate submissions for talks at OpenStack Summits

As a track chair, I look for content that will be informative for participants live at the event. I also look for entertaining content, well delivered, clear and usable as a complement to written documentation, something that can be enjoyed after the event on the video channel, too.

I’ve been a track chair for OpenStack Summit tracks many times, and recent discussions on the community mailing list made me decide to publish the principles that have guided my decisions.

When I’m evaluating a talk I look at:

  • the title
  • the abstract
  • the bio of the speaker

A good title immediately conveys what the talk is about and gives some idea of the argument or area of discussion. A good abstract expands on the title, describing the thesis, the argument and the conclusion, and also includes a reason for an identified audience to go see the talk at the conference, or later on YouTube. The bio of the speaker needs to reinforce all of the above and explain why the speaker is the best person to deliver the talk.

By looking only at title, abstract and bio, I discard bad titles, bad abstracts, and speakers with a bad bio who are unknown to me, to LinkedIn, to SlideShare and to Google in general. Usually that leaves me with twice as many talks as there are slots available in the track.

The next criteria I use to discard talks are: does this talk fit the overall objective of the Summit? Does it fit the specific objective of the track? For example, the objective of the Tokyo Summit was to focus on application developers and containers, and for the How To Contribute track my fellow track chairs decided to focus on content that we hadn’t heard before or that needed an update, and to give precedence to more region-specific content. This pass usually identifies a couple of clear winners and 2–3 talks with very little debate.

Now it’s time for group deliberation to finish the selection, starting from the talks that gained the majority of support and asking for dissenting opinions. All of our arguments were around the title, abstract and bio of the presenters; those provided more than enough information to make a decision, and we never had to use anything else.

We never looked at the public votes because those are easily gamed and I think it would be unfair at this point to give priority to someone only because they work for an organization that can promote talks during the voting process. Each candidate needs to be evaluated based on what they bring to the Summits, not on their marketing teams.

My argument against using the results of the public voting process to judge proposals is exactly that, at best, those votes don’t provide any value: bad titles and bad abstracts proposed by uninvolved individuals will gather very few votes anyway, but those are very easy for track chairs to discard. So no value there. And once only the decent proposals remain under consideration, looking at the public vote results may skew track chair decisions towards the usual people who speak every time, those with lots of Twitter followers, or those working for companies with well-organized marketing departments.

To me the votes are the result of a popularity contest, and if used for anything, they dramatically damage the minorities who are not on Twitter, the people who are shy by nature, and those working for companies that don’t have a strong social media presence (or don’t use it at all). In fact, I’d argue that the vote results should be hidden in the track chair UI.

I have always considered the voting process a marketing tool for the event, a community ritual, a celebration of the OpenStack community as a whole, and not something that the selection committee should use. I find looking at votes extremely unfair to the submitters and diminishing of the selection committee’s role, too. IMO a good committee should evaluate based on the quality of the content relative to the objectives for that specific Summit (overall focus, location), and totally ignore the popularity of the proposers (or their employers).

I wish we had the data to analyze and demonstrate whether the votes are the expression of a larger community or, as I suspect, just the result of Twitter reactions and the marketing push of a few companies. My gut feeling is that with thousands of proposals, nobody can possibly read and vote on them all, so the proposals that get voted on are necessarily only a fraction. Also, of all the people orbiting OpenStack, few actually vote. This means that few people see only some of the talk proposals. Which talks end up more popular? I notice how well organized companies like Rackspace, Red Hat, Mirantis, IBM, Tesora and a few others are about blogging their proposals when the time comes. If the data showed that talks from employees of those companies are the most voted, we could probably infer that either you push your talks or your talk doesn’t get voted on.

Am I suggesting we get rid of voting altogether? No, that’s not what I’m advocating. I think the voting process is valuable for the Summit as a whole and for the community as a whole. It’s a ritual, a celebration, a preparation for the event; it’s a collective, fun activity that we repeat every six months. The voting process is not broken and needs no fixing: it’s great. Only the results, IMO, are useless for selecting good content at the Summit.

The hard life of OpenStack application developers

I’m starting to feel the pain of developers attempting to write applications on top of OpenStack clouds: it hurts, and I haven’t come anywhere close to doing anything special.

I started following the getting started tutorial for the First App Application For OpenStack (faafo) and I got stuck on step 1, authentication. There are way too many things that are convoluted, confusing and sometimes carefully hidden (or ridiculously displayed) in Horizon that make it too hard to get started developing apps on OpenStack. I started following the Get Started document for python-libcloud:

You need the following information that you can obtain from your cloud provider:

auth URL
user name
project ID or name (projects are also known as tenants)
cloud region

Looks simple, but it’s not. The libcloud documentation says the auth URL doesn’t include the path, but in practice every cloud I tried behaves differently. DreamHost and Runabove require not only /v2.0 but also /tokens after the base URL (https://keystone.dream.io/v2.0/tokens), while CityCloud and HPCloud seem to work as documented. Some clouds will throw weird errors if you add /tokens. Libcloud is also confusing regarding support for the Keystone v3 API (which CityCloud uses): the official docs don’t mention v3 (bug), but a blog post from a libcloud maintainer provides examples of how to use it.
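
For the record, this is roughly what the connection boilerplate looks like for a Keystone v2 cloud. A minimal sketch: the URL, credentials, tenant and region below are placeholders you have to hunt down for each provider, which is exactly the problem:

    from libcloud.compute.types import Provider
    from libcloud.compute.providers import get_driver

    OpenStack = get_driver(Provider.OPENSTACK)

    # All values are placeholders. Some clouds want the bare base URL
    # here, others insist on /v2.0 or even /v2.0/tokens appended to it.
    driver = OpenStack(
        'username', 'password',
        ex_force_auth_url='https://keystone.example.com/v2.0/tokens',
        ex_force_auth_version='2.0_password',
        ex_tenant_name='my-project',
        ex_force_service_region='RegionOne',
    )

    print(driver.list_images())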

Finding the right project ID or name is also challenging because some clouds don’t show the project ID anywhere obvious in the web control panel (and not every cloud I tested runs OpenStack Horizon). Chances are you’ll have to find the RC file, which is fairly easy if the cloud you’re targeting uses Horizon.
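
Once you source that RC file (in Horizon it typically lives under Access & Security > API Access > Download OpenStack RC File), the credentials land in OS_* environment variables. A small sketch of mapping them to the values libcloud wants:

    # Run `source openrc.sh` first; the names below are the standard OS_*
    # variables that the Horizon-generated RC file exports.
    import os

    auth_url = os.environ['OS_AUTH_URL']         # may still need /tokens appended
    username = os.environ['OS_USERNAME']
    password = os.environ['OS_PASSWORD']
    project = os.environ.get('OS_TENANT_NAME')   # v2 clouds call projects "tenants"
    region = os.environ.get('OS_REGION_NAME')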

Even after I found all the pieces, I haven’t managed to authenticate using libcloud on CityCloud, Rackspace (using the openstack provider) or HP. Only with OVH did I manage to authenticate and get a list of images with my stock Ubuntu 15.04. I managed to authenticate on DreamHost only on Ubuntu 14.04 and in a clean, updated virtualenv.

It took a lot of trial and error to get rid of the Method Not Allowed errors and get to libcloud.common.types.InvalidCredsError: ‘Invalid credentials with the provider’, which at least hints that I have guessed the auth_url for CityCloud and HP correctly. But I have no idea why the credentials are not accepted, since the username and password are the ones I use on the Horizon panel on HP and on the custom CityCloud panel. My guess is that I haven’t correctly guessed the project_name or tenant_id or whatever it’s called. This is too confusing.

OVH seems to work, but it requires the full URL:

auth_url = 'https://auth.runabove.io/v2.0/tokens'

It won’t work without /v2.0/tokens. So, after spending one day reading docs and trying to guess permutations, I started thinking that libcloud is probably not a feasible approach.

I’ve started looking at OpenStack Infra’s homegrown interoperability library, shade, instead: authentication went smoothly and immediately with DreamHost and HP on my machine. At least that part was easy, and I finally managed to make some progress in my tests with it. Hopefully by the end of the day I’ll have the FAAFO running on…
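
For comparison, here is the shade version of the same authentication dance; a minimal sketch assuming a clouds.yaml entry named ‘dreamhost’ (the entry name is a placeholder):

    # shade reads credentials from clouds.yaml (e.g. in
    # ~/.config/openstack/) via os-client-config, so there is no
    # auth URL guessing game.
    import shade

    cloud = shade.openstack_cloud(cloud='dreamhost')
    for image in cloud.list_images():
        print(image.name)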

The end result is that it shouldn’t be this hard: even for a very modest developer like me, following a well-written, copy-and-paste tutorial should not be this confusing. A lot of work needs to be done to make OpenStack clouds friendly to app developers.