I was on the "Ecosystem" track session selection team for DrupalCon London, which motivated me to finally do some more analysis of the traditional pre-selection session voting. Specifically, I wanted to compare the votes a session receives against the evaluations submitted after the conference.
By the way, if you have the opportunity, I highly suggest attending a DrupalCon; they are always great events.
Here are some conclusions from my analysis of the evaluation and voting data from DrupalCon Chicago:
- Voting was not a useful predictor of high-quality sessions! (A rough way to test this yourself is sketched after this list.)
- The pre-selected sessions did not receive better evaluations than the other sessions (though they may have served a secondary goal of getting attendees to sign up earlier).
- We should re-evaluate how we do panels; they tend to receive lower evaluation scores.
- The number of evaluations submitted increased 10% compared to San Francisco, which seems great. (Larry Garfield theorizes it is related to the mobile app; I think there are a lot of factors involved.)
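For the curious, here is roughly how you could check the first claim against the raw data. This is a minimal sketch, not the actual analysis I ran: it assumes a hypothetical `sessions.csv` with one row per session, and the column names (`votes`, `avg_eval`) are mine, not the real DrupalCon data format.

```python
import csv

from scipy.stats import spearmanr

# Hypothetical input: one row per session with its pre-conference vote
# count and its average post-conference evaluation score.
votes, evals = [], []
with open("sessions.csv", newline="") as f:
    for row in csv.DictReader(f):
        votes.append(int(row["votes"]))
        evals.append(float(row["avg_eval"]))

# Spearman rank correlation: do sessions that ranked high in voting
# also rank high in attendee evaluations?
rho, p_value = spearmanr(votes, evals)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
# A rho near zero would support the conclusion that votes do not
# predict evaluation scores.
```

A rank correlation is a reasonable choice here because we care whether the voting *ordering* matches the evaluation *ordering*, not whether the raw numbers are on the same scale.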
Is voting a good way to judge conference session submissions?
DrupalCon has historically used a fairly common voting-and-committee system for session selection. This is also the default workflow for sites based on the Conference Organizing Distribution.
The typical system:
1. Users register on the site
2. They propose sessions (usually with a submission cutoff date before voting begins)
3. Voting opens: people (sometimes any registered user, sometimes only attendees) vote for their favorite sessions
4. During steps 2 and 3, a session selection committee encourages submissions and contacts proposers to help improve their session descriptions