[OSGeo-Conf] [OSGeo-Discuss] Conference selection transparency (Was Announcement: Call for Location global FOSS4G 2023)

María Arias de Reyna delawen at gmail.com
Thu Jan 13 07:22:34 PST 2022


On Thu, Jan 13, 2022 at 1:13 PM Jonathan Moules via Discuss
<discuss at lists.osgeo.org> wrote:
> I don't think there's any need to reinvent the wheel here; a number of open-source initiatives seem to use scoring for evaluating proposals. Chances are something from one of them can be borrowed.
>
> Apache use it for scoring mentee proposals for GSOC: https://community.apache.org/mentee-ranking-process.html
>
> Linux Foundation scores their conference proposals for example: https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/program/scoring-guidelines/

Am I understanding it wrong, or is this for accepting talk
proposals, not conference proposals?

Scoring a contractor for a well-defined project (as you pointed out
public administrations do), choosing the right person for a specified
job, or deciding whether a talk deserves a slot in the schedule is
more or less "easy" compared to deciding who will host a conference.

If you want to propose a draft of scoring criteria for FOSS4G, I
think it would be interesting to go through them and try to come up
with something. Even if the scoring is not binding, it may help
future proposals see what the path is.

My only "but" with this system (which I almost always use when I
have to review anything, and which I intended to use for this FOSS4G
voting) is that it is hard to come up with an objective system that
accounts for all the variables. And if the score does not match the
final decision, that mismatch may be difficult to process.

I have been a GSoC mentor with the ASF and, true, we have a ranking
process, but it mostly helped us to order the candidates and reject
those that deviated too much. The final decision was not a purely
numeric one. When the difference is small, you do have to consider
other things. And from what I have seen these past few years with
FOSS4G, either one candidate obviously outshines the rest, or the
difference between candidates is really small and it comes down to
things that may not even be defined in the RFP.

And there are things you have to consider that a generic scoring
system can't help you with. We used this kind of system for FOSS4G
2021 to decide which talks to accept for the conference; community
voting carried a strong weight but was not binding. We had to make
exceptions for good talks that were experimental and didn't get a
good score, so on the numbers alone they would have been rejected.
We also had to reject some duplicated talks that scored high,
because we couldn't justify accepting both. Which one to reject?
Usually the one whose speaker already had more talks. But what if
neither speaker has any other talks? That's something you have to
check case by case.
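To make that kind of process a bit more concrete, here is a minimal
sketch in Python. All the weights, field names, and thresholds are
invented for illustration; this is not the actual FOSS4G 2021
selection tooling, just the shape of a non-binding weighted score
plus the duplicate tie-break I described:

    # Minimal sketch of a non-binding weighted scoring process.
    # Weights, field names, and scales are invented for illustration;
    # this is NOT the actual FOSS4G 2021 selection tooling.
    from dataclasses import dataclass

    @dataclass
    class Talk:
        title: str
        speaker: str
        reviewer_score: float   # e.g. average reviewer mark, 0-10
        community_votes: float  # e.g. normalized community vote, 0-10

    def weighted_score(talk: Talk, community_weight: float = 0.6) -> float:
        """Combine reviewer and community input; the community vote has
        a strong weight but the result is still only advisory."""
        return (community_weight * talk.community_votes
                + (1 - community_weight) * talk.reviewer_score)

    def pick_between_duplicates(a: Talk, b: Talk, accepted: list[Talk]) -> Talk:
        """Tie-break for duplicated talks: keep the one whose speaker has
        fewer talks already accepted; if equal, it goes back to a human."""
        a_count = sum(1 for t in accepted if t.speaker == a.speaker)
        b_count = sum(1 for t in accepted if t.speaker == b.speaker)
        if a_count != b_count:
            return a if a_count < b_count else b
        raise ValueError("No objective tie-break; decide case by case.")

    if __name__ == "__main__":
        talks = [
            Talk("Experimental mapping jam", "Alice", 5.5, 6.0),
            Talk("GeoServer in production", "Bob", 8.0, 9.0),
        ]
        for t in sorted(talks, key=weighted_score, reverse=True):
            print(f"{t.title}: {weighted_score(t):.2f}")

The point of the sketch is the last line of pick_between_duplicates:
the system can order candidates, but the hard cases still end up
with people deciding.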

Which leads us to the point that with scoring there is less room for
experimentation, because candidates will focus on getting high
scores on specific questions, not on offering their best. For
example, the proposal we made for FOSS4G Sevilla 2019, in a pirate
amusement park to celebrate Magallanes... no score could have
predicted that.

So I may agree to scoring, but not to binding scoring.

But first we need a draft to work from for scoring proposals :)

