In this post, I’ll try to give some insights into the recent work and workflows of the global State of the Map (SotM) program committee. Having been a member of the committee for the last couple of years, I figured this might be useful, or at least interesting, for other program committees or content teams.
Please note that the views and experiences expressed in this post are my own and mostly based on my memory. I’m also just trying to describe how we worked - which may not be the best way, maybe not even a recommendable one, but one that seemed to work for us. Just because it worked for us doesn’t mean it will work for others, and vice versa.
In some way, this post is also a follow-up to a previous post of mine where I wrote about the software and services behind the State of the Map. I’ll try to avoid duplicating content, so see the previous post for more information about the tools and services that were used.
Organisation
For as long as I’ve been on the program committee, it has been organised as a split between a ‘core program committee’ (the ‘core team’ in the rest of this post) and the full program committee (the ‘program committee’ in the remainder of this post), which comprises all members, core team included. I don’t remember whether this split was a conscious decision, whether it simply emerged from asking the program committee members who wanted to help with which tasks, or whether there was another reason. But at least so far, we haven’t had a reason to change it.
Program Committee
The primary tasks of the program committee were reviewing and rating the submitted talks, workshops, etc., and providing feedback on drafts, e.g. of the call for participation before it got published. Once the call for participation was published, the program committee members were also encouraged to announce it in their local communities.
Core Team
In addition to the above, the core team also drafted the call for participation, was responsible for coordinating with the SotM working group (SotM WG), prepared the actual conference schedule/program and communicated with the speakers. Based on this definition, I can be considered a member of the core team.
Phases
The journey from drafting the call for participation to actually running the conference consisted of multiple, more or less distinct phases. I’ll roughly describe them in the subsequent sections in chronological order, but some of the phases also tended to overlap.
Call for Participation
One of the first tasks was drafting and publishing the call for participation. Those who read the call in the last few years probably noticed that its content stayed more or less the same - at least in the most recent years, we actually used the call from the previous year as a basis and applied changes based on new experience or ideas. This ‘reuse’ isn’t set in stone though, so we might eventually do a more fundamental overhaul. The drafting itself usually happened in a collaborative online editor while we were in a conference call or, if I remember correctly, at least once even during a face-to-face meeting.
Review
Once the deadline for the submission of talks, workshops, etc. had passed, the review phase started. Reviewing a submission meant giving it a rating and ideally also a short, personal, internal comment about the talk. We usually had four possible ratings, namely 0 (“no, does not fit”), 1 (“I don’t like it, but if you want”), 2 (“good”) and 3 (“excellent, we have to”). The comments were primarily used to make a decision in cases where several talks had roughly the same rating but we could not take all of them, e.g. because there was not enough space or too much content overlap between different talks. They also captured further hints, information or options for a talk, which was tremendously useful when making the final decision and creating the actual schedule.
So far, we haven’t had an explicit minimum number of reviews per submission or per reviewer; both were merely ‘as many as possible’, which has worked reasonably well.
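To make this a bit more concrete, here is a minimal, purely illustrative Python sketch of how ratings on this 0-3 scale could be aggregated per submission. This is not our actual tooling, and the data structure and values are made up.

```python
from statistics import mean

# Hypothetical review data: submission code -> list of (rating, comment)
# pairs on the 0-3 scale described above.
reviews = {
    "TALK-01": [(3, "excellent, we have to"), (2, "good")],
    "TALK-02": [(1, "I don't like it, but if you want"), (0, "no, does not fit")],
}

for code, entries in reviews.items():
    ratings = [rating for rating, _comment in entries]
    # The average rating later serves as the main sorting criterion.
    print(f"{code}: {len(ratings)} review(s), average rating {mean(ratings):.2f}")
```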
Acceptance/Rejection of Talks and Creation of Schedule
Eventually, we had to decide which talks to accept, which ones we sadly had to reject, and how to schedule the accepted ones. The duration of the conference and the number of rooms gave an upper limit for the number of available talk slots. Since there were also opening and closing sessions and usually a couple of lightning talk sessions as well as the academic track, the number of talks we were able to accept was actually smaller than this. Once this number was determined, one could basically sort the submissions by their average rating and accept as many talks as there were slots (see the sketch below). In practice, it was a bit more complex than this, and we might even have had to reject submissions with a good rating if there was too much overlap with another submission or if there was a ‘conflict’ with the rating criteria that we published as part of the call for participation. This could for example happen if too many people from the same organisation had submitted talks. Submissions with the same or very similar ratings around the ‘cut-off’ also got more attention. These were the cases where the comments in the ratings were especially useful. Sometimes, these comments also led to a ‘downgrade’ of a regular talk to a lightning talk - or the ‘upgrade’ of a regular talk to the keynote presentation.
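As a rough illustration of the slot arithmetic and the rating-based cut-off, here is a small Python sketch. All numbers are invented, and the real decision also involved the overlap and criteria checks described above.

```python
from statistics import mean

# Hypothetical conference dimensions.
DAYS = 3
ROOMS = 3
SLOTS_PER_ROOM_PER_DAY = 8
RESERVED_SLOTS = 10  # opening/closing, lightning talk sessions, academic track, ...

available_slots = DAYS * ROOMS * SLOTS_PER_ROOM_PER_DAY - RESERVED_SLOTS

# Hypothetical submissions: (title, ratings given by the reviewers).
submissions = [
    ("Talk A", [3, 3, 2]),
    ("Talk B", [2, 2, 1]),
    ("Talk C", [1, 0, 1]),
]

# Sort by average rating, best first, and accept as many as fit.
ranked = sorted(submissions, key=lambda s: mean(s[1]), reverse=True)
accepted, rejected = ranked[:available_slots], ranked[available_slots:]
print("accepted:", [title for title, _ in accepted])
print("rejected:", [title for title, _ in rejected])
```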
After we had more or less decided which talks to accept and which ones to reject, and had updated the state of the talks in the conference planning system (Pretalx) accordingly, the building of the schedule started. The order of these steps probably varied a bit from year to year, as did the way we built the schedule: we either did this in a face-to-face meeting or online during a conference call. The first draft of the schedule was done on either a physical or a virtual whiteboard using physical or virtual cards, respectively. These cards contained some information about the talks, like the talk title, speaker, rating, track, reviewer comments, etc. Once we were happy with the schedule, we ‘transferred’ it back to Pretalx. If there was a conflict between our draft schedule and the availability of a speaker, we tried to resolve that within Pretalx, which detects these issues. Afterwards, we created a first ‘beta’ release of the schedule and let Pretalx send the notification emails to speakers, which informed them about the acceptance or rejection of their submission and about the time when their session was scheduled.
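For anyone who wants to script against Pretalx themselves, here is a hedged sketch of reading the submissions and their states back out via its REST API, e.g. to check which accepted talks are still unconfirmed. The endpoint, authentication header and field names follow the publicly documented Pretalx API, but treat them as assumptions and verify against the current documentation; the instance URL, event slug and token are placeholders.

```python
import requests

# Placeholders: adjust to your Pretalx instance, event slug and API token.
BASE = "https://pretalx.example.org/api/events/sotm-2022"
HEADERS = {"Authorization": "Token YOUR-API-TOKEN"}

url = f"{BASE}/submissions/"
while url:
    response = requests.get(url, headers=HEADERS, timeout=30)
    response.raise_for_status()
    page = response.json()
    for submission in page["results"]:
        # 'state' is e.g. 'accepted', 'confirmed' or 'rejected'.
        print(submission["code"], submission["state"], submission["title"])
    url = page["next"]  # the API paginates; follow 'next' until exhausted
```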
Accepted speakers then had to confirm their session(s), so the ‘official’ publication of the schedule was usually postponed a bit until we had received these confirmations. In anticipation of speakers who (had to) cancel their session for some reason, we always kept some ‘backup talks’. These usually were talks that we would have had to reject because their rating was below the cut-off, but which we nonetheless liked, or where we assumed that the speaker would likely be ok with the arrangement - and where they were most likely going to attend the conference anyway, even though their submission was initially rejected and they might thus only be able to hold their session if another speaker cancelled. For these backup talks, we usually edited the rejection notification emails in Pretalx before sending them and asked the speakers whether they would be ok with potentially still giving their talk if we needed them as a ‘backup’.
Speaker Information and “Support”
Especially during the purely virtual SotM conferences, we also had to disseminate quite a bit of information to the speakers, for example how to produce and submit their pre-recorded talks, how the Q&A sessions worked, etc. Speakers might also have had questions, for example about the infrastructure available at the conference venue or some other organisational details. For the dissemination of information, we again used Pretalx, which has useful filters for this (i.e. to select the correct recipients from the people known to Pretalx), and questions from the speakers were collected in a ticketing system. This helped us make sure that no message got missed or was replied to more than once. Since not all speaker questions were sent to the email address of the ticketing system, but to the email address of the SotM working group mailing list instead, we also participated in that list, directly responded to speaker questions there and tried to guide the speakers to the ticketing system. Some of this also required coordination with the SotM working group and the local team. Occasionally, there were also changes to accepted and confirmed submissions, for example when the time a speaker was available changed. In that case, we also had to move some other talks around in the schedule, resolve potential new conflicts and inform the affected speakers.
Conference
Once the conference started, the work of the program committee was pretty much done, except for some last-minute questions or, unfortunately, the occasional last-minute talk cancellation, which required at least a schedule update and publication. Apart from that, this was the time when one could finally watch all the exciting sessions one had seen while reviewing the submissions, enjoy the conference, meet fellow program committee members, old and new friends and of course many other people with an interest in OpenStreetMap.
Since there was no global SotM in 2023, the program committee has mostly been on standby since SotM 2022 in Firenze ended, but we will continue our work in early 2024 by starting to draft the call for participation for SotM 2024 in Nairobi, Kenya.