How I evaluate submissions for talks at OpenStack Summits

As a track chair, I look for content that will be informative for participants live at the event. I also look for entertaining content, well delivered and clear, that works as a complement to written documentation and can still be enjoyed after the event on the video channel.

I’ve been a track chair for OpenStack Summit tracks many times, and recent discussions on the community mailing list prompted me to publish the principles that have guided my decisions.

When I’m evaluating a talk I look at:

  • the title
  • the abstract
  • the bio of the speaker

A good title immediately conveys what the talk is about and gives some idea of the argument or area of discussion. A good abstract expands on the title, describing the thesis, the argument and the conclusion; it also gives an identified audience a reason to see the talk at the conference, or later on YouTube. The speaker’s bio needs to reinforce all of the above and explain why the speaker is the best person to deliver the talk.

By looking only at the title, abstract and bio, I first discard talks with bad titles, bad abstracts, or speakers whose bios are weak and who are unknown to me, to LinkedIn, to SlideShare and to Google in general. That usually leaves me with twice as many talks as there are slots available in the track.

The next criteria I use to discard talks are: does this talk fit the overall objective of the Summit? Does it fit the specific objective of the track? For example, the objective of the Tokyo summit is to focus on application developers and containers. And for the How To Contribute track, my fellow track chairs decided to focus on content we hadn’t heard before or that needed updating, and to give precedence to more regional-specific content. This pass usually identifies a couple of clear winners and 2-3 talks with very little debate.

Now it’s time for group deliberation to finish the selection, starting from the talks that gained the most support and asking for dissenting opinions. All of our arguments revolved around the title, abstract and bio of the presenters; those provided more than enough information to make a decision, and we never had to use anything else.

We never looked at the public votes, because those are easily gamed, and I think it would be unfair at this point to give priority to someone only because they work for an organization that can promote talks during the voting process. Each candidate needs to be evaluated based on what they bring to the Summit, not on their marketing team.

My argument against using the results of the public voting process to judge proposals is exactly that: at best, those votes provide no value. Bad titles and bad abstracts proposed by uninvolved individuals gather very few votes, but they are very easy for track chairs to discard anyway, so no value there. And once only the decent talks remain under consideration, looking at the public vote results may skew track chair decisions towards the usual people who speak every time, those with lots of Twitter followers, or those working for companies with well-organized marketing departments.

To me the votes are the result of a popularity contest, and if used for anything, they dramatically disadvantage the minorities who are not on Twitter, the people who are shy by nature, and those working for companies that don’t have a strong social media presence (or don’t use it at all). In fact, I’d argue that the vote results should even be hidden in the track chair UI.

I have always considered the voting process a marketing tool for the event, a community ritual, a celebration of the OpenStack community as a whole, and not something the selection committee should use. I find looking at votes extremely unfair to the submitters, and it diminishes the selection committee’s role, too. IMO a good committee should evaluate based on the quality of the content relative to the objectives of that specific summit (overall focus, location), and totally ignore the popularity of the proposers (or their employers).

I wish we had the data to analyze and demonstrate whether the votes are the expression of the larger community or, as I suspect, just the result of Twitter reactions and the marketing push of a few companies. My gut feeling is that with thousands of proposals nobody can read and vote on them all, so the proposals that get voted on are necessarily only a fraction. Also, of all the people orbiting OpenStack, few actually vote. This means that only a few people ever see any given proposal. So which talks become the most popular? I notice how well organized companies like Rackspace, Red Hat, Mirantis, IBM, Tesora and a few others are about blogging about their proposals when voting time comes. If the data showed that talks from employees of those companies are the most voted, we could infer that either you push your talks or your talk doesn’t get votes.

Am I suggesting we get rid of voting altogether? No, that’s not what I’m advocating. I think the voting process is valuable for the summit and for the community as a whole. It’s a ritual, a celebration, a preparation for the event, a collective, fun activity that we repeat every six months. The voting process is not broken and needs no fixing: it’s great. Only its results, IMO, are useless for selecting good content at the Summit.