Medium is “pivoting”

I’m glad to read Ev Williams finally admitting that:

the broken system is ad-driven media on the internet. It simply doesn’t serve people. In fact, it’s not designed to.

He’s a smart person; I’m sure the next iteration of Medium will be interesting, whatever the outcome.

Source: Renewing Medium’s focus

Prototyping applications with Airtable

Gotta say: using a spreadsheet as a poor man’s database makes me feel poor every time. Google Sheets is so convenient that everybody starts a new sheet to hold some information in a table. The problem is, sheets are so convenient that new sheets keep getting started, again and again. Soon the company has 20 sheets holding bad information. It’s the tragedy of the corporate wikis all over again.

Instead I’m one of the few who used to love Microsoft Access: I know, it’s bad as a database, but for rapidly prototyping small applications it was awesome. As a poor man’s database, Access was at least credit-worthy compared to spreadsheets.

Unfortunately Google doesn’t have anything similar to MS Access, so when I discovered Airtable I was really happy. I’ve prototyped a small application to keep track of conferences and calls for papers. Finally I don’t have to keep entering the same data every year in a new sheet, and I can keep tables in fairly normalized form. Nice stuff. I wish Google Apps would buy it … and the cynic in me says: “so we can have dozens of similar databases instead of hundreds of similar spreadsheets (the same tragedy, at a smaller scale).”
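
To illustrate what “fairly normalized form” buys you over one sheet per year, here is a minimal sketch. The schema is my own invention for illustration (not Airtable’s actual data model): the conference exists once, and each year’s call for papers just references it.

```python
import sqlite3

# In a flat yearly sheet, the conference name, website, etc. get retyped
# every year and inevitably drift apart. Normalized, the conference is
# stored once and each year's CFP just references it.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE conference (
    id      INTEGER PRIMARY KEY,
    name    TEXT NOT NULL,
    website TEXT
);
CREATE TABLE cfp (
    id            INTEGER PRIMARY KEY,
    conference_id INTEGER NOT NULL REFERENCES conference(id),
    year          INTEGER NOT NULL,
    deadline      TEXT
);
""")
db.execute("INSERT INTO conference VALUES (1, 'OpenStack Summit', 'openstack.org')")
db.executemany("INSERT INTO cfp VALUES (?, 1, ?, ?)",
               [(1, 2015, '2015-07-15'), (2, 2016, '2016-02-01')])

# One query across all years replaces hunting through N yearly sheets.
rows = db.execute("""
    SELECT c.name, p.year, p.deadline
    FROM cfp p JOIN conference c ON c.id = p.conference_id
    ORDER BY p.year
""").fetchall()
```

The point is the shape, not the tool: Airtable’s linked records give you the same one-to-many relationship without writing SQL.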

Why you should not use Slack for volunteer communities

Last night I was asked again to join a Slack channel during a community event and I lost it. I lost my patience for this constant push into a walled garden. I can accept that only at work. I don’t want my email to be given away to a company so they can brag about their growth rate… and what do I get in exchange? More work to sign up, pay attention to terms of service, unsubscribe, remove notifications…

No! No! and NO! Community managers, don’t use Slack and please note:

  • It’s tacky to ask volunteers to surrender their email address to a third party who will use it to “occasionally” send unrequested “news and announcements”. No, thank you.
  • It’s annoying to force your volunteers to sign up for yet another service. Click click click click email click-verify tutorial etc. No, thank you.
  • It’s wrong to archive your volunteers’ conversations and credentials in a big fat place where the next criminal will grab them. Because you know it will eventually happen, right? No, thank you.

Slack works so well in a work environment because it keeps history, it’s very good on mobile, its notifications can be fine-tuned… it’s pervasive, and very effective… at work! But the last thing I want as a volunteer is to spend time fine-tuning notifications for each and every group I join.

Also, you can’t expect volunteers to keep up with the history of a channel (hey, hello, hi, wazzup, thank you, great, awesome, gif, gif gif…), so that Slack feature is not useful. As a community manager, you should know that there is always someone who abuses the @here @channel @all shortcuts to ask moronic “support” questions in the most populated #general channel. There you have your daily “@all it doesn’t work!” even if there is a channel called #support.

Buzz off, and RTFM! I said it!

There are better ways: non-intrusive, easy to start, and easy to quit when the meeting is done. Etherpads have chat: do the volunteer work, take notes, share links in the chat. If Etherpad is too complicated, I’d accept Google Docs.

Do you just need a temporary channel to chat? Just create one on the fly with freenode web chat, Mibbit or any other web IRC client. Hit it and quit it: chances are, the archives of your meeting are not going to be read by anybody anyway. Let your community focus on asynchronous systems: email works well, and so do forums and comments on your website.

You should not give away your community members: they’re not yours to give in the first place!

Developing apps on OpenStack is still too complicated

Yesterday’s meeting with the OpenStack App Developer Working Group proved that new developers approaching OpenStack enter a system designed to make them fail.

The community seems to be distracted by uncovering new problems while the old, known problems go unaddressed. I’m running for a seat on the OpenStack Foundation Board: if you care about the developer experience, consider voting for me (search your inbox for OpenStack Foundation – 2016 Individual Director Election; you can also change your vote.)

The App Developer Working Group drafted a detailed analysis of the app developer experience with the major cloud providers AWS, Azure and Google Compute, and compared that experience with Rackspace Cloud. It’s a great piece of work.

The first thing you notice in the report? The team at Intel who ran the analysis chose Rackspace as an ‘OpenStack reference cloud’. That choice is debatable, but when we discussed alternatives during the meeting it became clear that there is no good choice! There is no vanilla OpenStack implementation when it comes to application developers; they’re all snowflakes (as Randy Bias put it)… All of the public clouds in the OpenStack Marketplace have made choices that affect app developers one way or another. If we want to assess the whole development experience on OpenStack, we need a different framework.

As an open source community we can’t compare AWS, Azure and Google Compute to either all of the OpenStack public clouds or only one. Powering up TryStack to be an app developer playground wouldn’t really work either.

This is a much larger conversation: we need to discuss more on the User Committee mailing list, hold fewer in-person meetings, and share intentions online with others before turning them into actions and wasting time.

I’m concerned by the lack of focus within the community: when operating OpenStack became a visible issue, the whole community focused on helping operators out. We need to do the same for app developers.

What is an OpenStack Powered Compute?

The OpenStack Technical Committee voted on a resolution suggesting that the OpenStack Board modify the definition of “OpenStack Powered Compute” to include statements such as

An `OpenStack Powered Compute`_ Cloud MUST be able to boot a Linux Guest

This is quite a change in OpenStack DefCore efforts, since as Rob says

The fundamental premise of DefCore is that we can use the development tests for API and behavior validation

DefCore has always been about the OpenStack API and carefully avoided checking the implementation of clouds, leaving enough space for vendors to differentiate their products without harming consumers. An ironic twist of fate is now forcing the whole program to take a stand on implementation, too.

Stating that an OpenStack cloud MUST be able to boot a Linux Guest is the most controversial part, and I see why the TC is going in this direction: as an OpenStack user, I expect to be able to upload my images to any OpenStack cloud. Given that Linux is the OS of choice for the vast majority of today’s cloud workloads, booting my Linux image is a must-have. It’s a practical choice and one that makes a lot of sense.

The problem is that the TC forcing implementation details is a slippery slope. Will the TC also suggest that any OpenStack cloud must offer a routable IP, or boot a VM in less than 5 seconds, or that flavor names are always the same across all OpenStack clouds? Granted, all these requests make sense from the perspective of at least some users: there are already lots of unnecessary complications for putting workloads on OpenStack right now.

The question is whether those mandates make sense for OpenStack as a whole, and I’m not convinced they do. I tend to lean towards two complementary positions:

  1. Make sure that OpenStack Powered Compute clouds are transparent and their behavior is discoverable. Just as the OpenStack Foundation exposes the results of the DefCore API compatibility tests on the Marketplace, I think DefCore should test the implementation of the clouds, public clouds especially, and expose the results. This way a user would know that a cloud passes DefCore tests, runs OpenStack upstream code (as done now), allows uploading/booting Linux guests, offers IPv6 by default, boots in x seconds, has XYZ flavors, allows custom flavors, etc.
  2. DefCore should consider public clouds, hosted private clouds and distributions as different beasts. To me it makes more sense to expect a public cloud to allow uploading Linux guests than to mandate the same for a private cloud… and even less sense for a distribution. The buying process for these is different, and the needs of users are different, too.
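
As a sketch of the first position, a capability matrix could expose discoverable behavior per cloud instead of a single pass/fail badge. Everything below (the capability names, the `capability_report` helper, the probe results) is hypothetical, invented only to show the shape of the output:

```python
# Hypothetical capability list; a real one would come from DefCore test
# definitions rather than being hard-coded here.
CAPABILITIES = [
    "boots_linux_guest",
    "allows_image_upload",
    "ipv6_by_default",
    "allows_custom_flavors",
]

def capability_report(cloud_name, probe_results):
    """Render one cloud's discoverable behavior as a matrix row.

    probe_results maps capability name -> True/False as measured by some
    test run; anything not probed is reported explicitly as "untested".
    """
    cells = {cap: probe_results.get(cap, "untested") for cap in CAPABILITIES}
    return {"cloud": cloud_name, **cells}

report = capability_report(
    "example-public-cloud",
    {"boots_linux_guest": True, "ipv6_by_default": False},
)
```

A marketplace page could then render one such row per public cloud, letting users filter on the behaviors they actually depend on instead of inferring them from a MUST clause.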

Thoughts?

I’m a candidate for OpenStack Foundation Board of Directors: if you like what you read, remember to vote for me at the 2016 elections

Why OpenStack should stop tracking its Net Promoter Score

TL;DR because it makes no sense for OpenStack and it provides too much distraction at a time when the Foundation should focus on addressing more specific and actionable concerns tied to its mission “to inform the roadmap based on users’ opinions and deployment choices.”

I’d like the OpenStack community to focus on the fundamental issues of OpenStack adoption, ease and speed of development and ease of use by application developers. There is plenty of evidence that such areas need attention and there are already metrics tracking them: adding Net Promoter Score (NPS) is a distraction.

This is part of the reason why I’m running for Individual Director of the OpenStack Board. Read the rest of my candidacy pitch and hit me with any questions, comments or concerns you have over email, twitter, IRC (@reed), phone call, smoke signals … whatever works for you!

So, there are many strong reasons for removing the Net Promoter Score from the survey altogether. The main objection: NPS is used to track customer loyalty to a brand. A typical corporation can identify its customers and its brand with precision, and can therefore effectively measure such loyalty.

On the other hand, the OpenStack Foundation has many types of customers, the concept of its products is not exactly defined, and ultimately OpenStack can’t be considered a brand in the sense of the original HBR article that launched NPS. The User Survey is answered by everyone from cloud operators and sysadmins to OpenStack upstream developers, app developers/deployers and even a mystery blob called “Other” in a multiple choice answer. The question, “How likely are you to recommend OpenStack to a friend or colleague?” can be interpreted in too many ways to make any sense. Who exactly is the [Promoter|Detractor], who are their friends and colleagues, and what exactly is “OpenStack” to them? Did they get some pieces (which ones?) of OpenStack from a vendor? Did they get the tarballs? Are they skilled enough to use whatever they have purchased or downloaded for free?
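
For context, the arithmetic behind the score is standard: answers of 9-10 count as promoters, 0-6 as detractors, and NPS is the percentage of promoters minus the percentage of detractors. A minimal sketch:

```python
def net_promoter_score(ratings):
    """NPS from 0-10 answers to "How likely are you to recommend...?".

    Promoters score 9-10, detractors 0-6; passives (7-8) only dilute
    the result. The score ranges from -100 to +100.
    """
    if not ratings:
        raise ValueError("no responses")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Ten answers: 4 promoters, 3 passives, 3 detractors -> NPS of 10.
score = net_promoter_score([10, 9, 9, 10, 8, 7, 7, 6, 5, 3])
```

The same score can come from very different mixes of respondents, which is exactly the aggregation problem when “respondent” spans operators, upstream developers and app developers.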

I’d argue that the NPS collected in the Survey in its current form has no value whatsoever.

The OpenStack NPS went from #$ to *& (Dilbert, by Scott Adams)

I asked the User Committee to answer these questions: Is the User Committee convinced that an open source project like OpenStack gets any value from tracking this score? Why exactly is that number tracked in the survey, what exactly does the UC want to get from that number, what actionable results are expected from it?

Stay tuned: I will update this blog post as the conversation continues.

Frank Day added to the list:

NPS is an industry standard measure.

He missed my point, unfortunately. I didn’t start this conversation to debate whether NPS is an industry standard measure. My question is whether it makes sense to track it for OpenStack when: (a) OpenStack is not an industry, it’s an open source collaboration; (b) it has no exact definition of its product (what is OpenStack in the context of the user survey? Only the core? The whole big tent? The tarball from upstream? A distribution from $VENDOR?); and (c) it lacks a definition of who its customers are (the survey has only a vague idea).

If we are to drop NPS, it should only be in exchange for some other measure of satisfaction.

He fails to explain why that should be important for a huge open source collaborative project like OpenStack. And whose satisfaction, doing what, is not clear either.

Lauren Sell and Heidi Joy Tretheway gave a more thoughtful answer that I suggest reading in full. Some excerpts:

When we analyzed the latest user survey data, we looked at a demographic variable (user role, e.g. app developer), a firmographic variable (e.g. company size), and deployment stage. We learned that overall, there was no significant difference in NPS scores for people who identified with a specific function, such as app developers, or for companies by size. As a result, we didn’t do further data cuts on demographic/firmographic variables. We did learn that people with deployments in production tended to rate OpenStack more highly (NPS of 43 for production, vs 24 for dev/qa and 20 for POC).

One cause for variance is that unfortunately we’re not comparing apples to apples with the trending data in the latest survey report.

Going forward, I think we should focus on deployments as our trend line.

As a next step, the independent analyst plans to draw up correlations (particularly for low scores) associated with particular technology decisions (e.g. projects or tools) and attitudinal data from the “your thoughts” section (e.g. we might find that firms that value X highly tend to rate OpenStack lowest).

I replied on the list saying that ultimately all this slicing and dicing is not going to tell us more than what we already know anecdotally and from other data (that the community used to collect), and suggesting that resources would be better allocated towards one-on-one interviews with survey respondents and other actions towards the community.

Roland Chan sent an update to the list after the meeting: the committee decided to keep analyzing this score. Heidi Joy will work with the data scientist, which means resources that could be better spent elsewhere are being used to serve a corporate number.

How I evaluate submissions for talks at OpenStack Summits

As a track chair, I look for content that will be informative for participants live at the event. I also look for entertaining content, well delivered, clear, and usable as a complement to written documentation, something to be enjoyed after the event on the video channel as well.

I’ve been a track chair for OpenStack Summit tracks many times, and recent discussions on the community mailing list made me decide to publish the principles that have guided my decisions.

When I’m evaluating a talk I look at:

  • the title
  • the abstract
  • the bio of the speaker

A good title conveys immediately what the talk is about and gives some idea of the argument or area of discussion. A good abstract expands on the title, describing the thesis, the argument and the conclusion, and also gives an identified audience a reason to go see the talk at the conference, or later on YouTube. The bio of the speaker needs to reinforce all of the above and explain why the speaker is the best person to deliver the talk.

Looking only at title, abstract and bio, I discard talks with bad titles, bad abstracts, and speakers whose bio is bad and who are unknown to me, to LinkedIn, to SlideShare and to Google in general. Usually that leaves me with twice as many talks as slots available in the track.

The next criteria I use to discard other talks are: does this talk fit the overall objective of the Summit? Does it fit the specific objective of the track? For example, the objective of the Tokyo summit is to focus on application developers and containers. And for the How To Contribute track, my fellow track chairs decided to focus on content that we hadn’t heard before or that needed an update, and to give precedence to more region-specific content. This pass usually identifies a couple of clear winners and 2-3 talks with very little debate.

Now it’s time for group deliberation to finish the selection, starting from the talks that gained the majority of support and asking for dissenting opinions. All of our arguments were about the title, abstract and bio of the presenters; those provided more than enough information to make a decision, and we never had to use anything else.

We never looked at the public votes because those are easily gamed and I think it would be unfair at this point to give priority to someone only because they work for an organization that can promote talks during the voting process. Each candidate needs to be evaluated based on what they bring to the Summits, not on their marketing teams.

My argument against using the results of the public voting process to judge proposals is exactly that, at best, those votes provide no value: bad titles and bad abstracts proposed by uninvolved individuals will gather very few votes, but those are very easy for track chairs to discard anyway. So no value here. And once only the decent talks remain under consideration, looking at the public vote results may skew track chair decisions towards the usual people who speak every time, those with lots of Twitter followers, or those working for companies with well-organized marketing departments.

To me the votes are the result of a popularity contest, and if used for anything, they dramatically damage the minorities that are not on Twitter, the people who are shy by nature, and those working for companies that don’t have a strong social media presence (or don’t use it at all). In fact, I’d argue that the vote results should even be hidden in the track chair UI.

I always considered the voting process a marketing tool for the event, a community ritual, a celebration of the OpenStack community as a whole, and not something that the selection committee should use. I find looking at votes extremely unfair to the submitters and diminishing of the selection committee’s role, too. IMO a good committee should evaluate based on the quality of content relative to the objectives for that specific summit (overall focus, location), and totally ignore the popularity of the proposers (or their employers).

I wish we had the data to analyze and demonstrate whether the votes are the expression of a larger community or, as I suspect, just the result of Twitter reactions and pushes from the marketing efforts of a few companies. My gut feeling is that with thousands of proposals, nobody can possibly read and vote on them all, so the proposals voted on are necessarily only a fraction. Also, of all the people orbiting OpenStack, few actually vote. This means that only a few people find any given talk proposal. Which talks are more popular? I notice how well organized companies like Rackspace, Red Hat, Mirantis, IBM, Tesora and a few others are at blogging about their proposals when the time comes. If the data showed that the talks from employees of those companies are the most voted, we could probably infer that either you push your talks or your talk doesn’t get voted on.

Am I suggesting getting rid of voting altogether? No, that’s not what I’m advocating. I think the voting process is valuable for the summit as a whole and for the community as a whole. It’s a ritual, a celebration, a preparation for the event, a collective, fun activity that we repeat every six months. The voting process is not broken and needs no fixing: it’s great. Only the results, IMO, are useless for selecting good content at the Summit.

A new push for OpenStack public clouds?

Monty “mordred” Taylor just announced that he’s leaving HP to go work at IBM. Usually something like this wouldn’t deserve more fanfare than the twittersphere explosion already underway. In this case, I think the announcement is more important than just an OpenStack board member and technical leader changing employers.

Monty says on his blog that he is leaving HP because he wants to build public clouds, implying that he can’t do that at HP. At IBM instead he’ll be focusing on a strong OpenStack-based public cloud, to compete head-to-head with Amazon (and surpass it).

His words confirm the impression I had when analyzing the competitive landscape of public clouds for DreamHost: HP is clearly targeting the enterprise market, with its public cloud used mainly as a supporting mechanism for its private clouds.

I think OpenStack will benefit from more focus on public clouds: I have the feeling they are taken for granted, since there are working groups for pretty much everything but public clouds. Meanwhile, all operators running large clusters have nightmare stories. Hopefully lots of positive changes aimed at public cloud users will keep going upstream (and we can avoid creating yet another working group in OpenStack-land).

OpenStack is as start-up friendly as anything

I’ve read someone complaining about OpenStack not being friendly enough to startups one time too many. The latest post by Rob Hirschfeld, 10 ways to make OpenStack more Start-up Friendly, made me want to respond. I just can’t stand the Greek chorus of complaints, especially from someone who sits on the board and can actually push for changes where necessary. I promised I’d spend time on his 10 points, and here is my take on each of them:

Accept companies will have some closed tech – Many investors believe that companies need proprietary IP. An “open all things” company will have more trouble with investors.

Why is this an OpenStack problem? If the VC industry has a problem with open source, then the problem belongs to open source as a whole, or to the VCs (depending on your point of view). Maybe customers prefer to buy open-source-based solutions instead of buying proprietary code from small startups. This is not an OpenStack issue.

Stop scoring commits as community currency – Small companies don’t show up in the OpenStack committer economy because they are 1) small and 2) working on their product upstream ahead of OpenStack upstream code.

Code is the most valuable currency in an open source project, and there is absolutely no way OpenStack should be any different. Companies small and big will contribute in proportion to their size, and code is how a small company doing great work can gain more influence than a large company doing little. If you’re saying that the community shouldn’t put first and foremost the “top ten” charts of contributing companies (like Stackalytics does), then I agree with you. The Foundation celebrates the individual contributors to each release instead of counting companies. This is not an OpenStack problem. Maybe it is a Stackalytics problem, and one of the reasons I prefer to partner with Bitergia for the community dashboard.

Have start-up travel assistance – OpenStack demands a lot of travel and start-ups don’t have the funds to chase the world-wide summits and mid-cycles.

OpenStack has already addressed this problem with the Travel Support Program. In Vancouver the Foundation spent around $50k to send about 30 people from all over the world, from startups and students. If that’s not enough, I’m sure more money can be raised. This is not an issue; there is a solution in place already.

Embrace open projects outside of OpenStack governance – Not all companies want or need that type of governance for their start-up code base.  That does not make them less valuable, it just makes them not ready yet.

I don’t even know where this comes from: is anybody forcing anyone to host code in the git.openstack.org/openstack namespace? And isn’t the OpenStack project offering its resources for free to host code in our systems, without imposing governance or rules, via git.openstack.org/stackforge? If there are companies interested in getting under the OpenStack governance, it’s their choice, and based on my knowledge they choose it because they get business value out of it. This is a non-issue.

Stop anointing ecosystem projects as OpenStack projects – Projects that are allowed into OpenStack get to grab a megaphone even if they have minimal feature sets.

Even if this was a problem, and I don’t think it is, it should work in favor of startups: many of them have nothing to show but good intentions, for which a megaphone is exactly what they want and use. This is a non-issue.

Be language neutral – Python is not the only language and start-ups need to make practical choices based on their objectives, staff and architecture.

Nobody forces anyone to become an official OpenStack project (which requires some level of standardization). It’s a choice. And, in any case, there is a lot of JavaScript and Ruby in OpenStack, with pieces in Go coming as well. This is a non-issue.

Have a stable base – start-ups don’t have time to troubleshoot both their own product and OpenStack.  Without core stability, it’s risky to add OpenStack as a product requirement.

This is a truism; it’s also something that is constantly being advocated and keeps on improving.

Focus on interoperability – Start-ups don’t have time to evangelize OpenStack.  They need OpenStack to have a large base of public and private installs because that creates an addressable market.

So let me get this straight: IBM, EMC and Cisco are scooping up the first waves of startups that tried to build a product on the rudimentary OpenStack. Such big guns have the clout to create that large addressable market, lending their credibility to OpenStack as a whole. They also write nice checks to the OpenStack Foundation to power its awesome marketing machine. This is good for startups: they ride a wave created with someone else’s money.

Limit big companies from making big pre-announcements – Start-ups primary advantage is being a first/fast mover.  When OpenStack members make announcements of intention (generally without substance) it damages the market for start-ups.  Normally corporate announcements are just noise but they are given credibility when they appear to come from the community.

Yeah, right, like you can really impose a rule against vaporware. It’s the way this market has always worked and will continue to work. Startups in any field have to learn how to live with it. This is not an OpenStack issue.

Reduce the contribution tax and patch backlog – Start-ups must seek the path of least friction.  If needed OpenStack code changes require a lot of work and time, then they are likely to look for less expensive alternatives.

Here I guess you’re talking about contributions to existing OpenStack projects, like Nova, Neutron and the like. If you think that some company can innovate fast on Nova while keeping the interoperability and stable releases you talk about above, then you’ve managed to confuse me. How realistically can a startup rip Nova apart, replace parts of it (all of it?) with the next big thing, and still give the thousands of users out there a happy upgrade path, interoperability and stability? This is just impossible to reconcile. Startups cannot innovate on something that is mature and in production. It would be like asking Apache HTTPD to be something else. Guess what: nginx happened outside of Apache, and it’s only natural. As a parallel in OpenStack, if someone comes up with a better Neutron, written in Go or Rust, with wide support, I’m ready to bet it will be admitted to the big tent rapidly.

Let me tell you why I think that OpenStack is at least as friendly as any other business environment, and maybe more:

  1. Corporate sponsorship fees for startups are low, a lot lower than for big corporations. And there are ways to lower the admission price even further (just ask).
  2. The ecosystem is now so huge that cool startups innovating on the edges can get exposed to potential customers, investors and buyers very quickly (the megaphone works).
  3. The ‘cloud’ space is changing so rapidly that the big guys cannot keep up and count on startups to do the risky experimentation. With so many big companies in OpenStack, I suppose there are plenty of opportunities to find good contacts.
  4. The OpenStack Summits have a whole track dedicated to startups, with talks about funding, business, strategies, acquisitions and more.

And finally a reminder: all startups operate in extremely unfriendly environments, most startups fail for various reasons.

To me, the recent exits of companies that innovated on the edges, back when OpenStack was held together with shoestring and spit, are a confirmation that the edges where innovation can happen are moving outside of Nova and Neutron. That’s normal and to be expected as the project matures.

Let’s celebrate and be happy for Piston and Blue Box and the others. Startups will always be welcome and will always find a good home in and around OpenStack.

A week with Dell XPS13 9343 (2015 model)

Last week I received the new Dell XPS 13 from the Sputnik program, the one with Ubuntu pre-installed. I wanted to vote with my wallet, since I believe Ubuntu is a pretty solid desktop environment, on par with Mac OS X and the various versions of Windows.

Design-wise, Dell has produced a very good looking machine, nothing to say about that. Kudos to Dell’s team for designing something much prettier than a MacBook Air. The nearly borderless screen is fantastic to look at.

The only complaint I have towards Dell is the set of options they picked for the Sputnik program: the only way not to get a touchscreen is to buy a severely limited machine with only a 128GB disk. No way. All the other options force you to spend money and sacrifice battery life on a useless feature.

I think touchscreens are uncomfortable to use on desktops, and I said so a long time ago. Unless the desktop OS is radically redesigned for touch and hand gestures on the monitor, it makes no sense. I would never have bought the touchscreen if Dell had offered a 256GB option with the regular monitor.

On the Ubuntu side there are quite a few glitches, like an issue with the cursor becoming sticky in some applications even with tap-to-click disabled on the touchpad, and some difficulty adapting to the ultra-dense display. By the way, that’s another reason not to get the touchscreen: a lower resolution is good enough on such a small laptop anyway. Installing Ubuntu Vivid was also a bit more painful than I expected.

All in all, I didn’t return the laptop as I thought I would, mainly because I needed to upgrade to a machine with 8GB of RAM quickly.