Summer of Code with Kiwix image

The application period for the Google Summer of Code (GSoC) is over. We received 64 candidates (only 3 of whom were female), down from last year's roughly 85. Apparently, adding “AI” to an org’s name or project title is the modern equivalent of putting “free pizza” on a flyer, and a few other orgs saw their number of applicants skyrocket. That’s the current meta.

This year’s hopefuls

Our requirement for applications to be considered (or even reviewed) was simple: submit at least one Pull Request during the application period (ideally merged). The bar was low, clearly communicated, and repeated everywhere we could.

Still, 42 candidates didn’t clear it. Only 22 did. At this point, it’s less a filter and more a personality test.

A breakdown of applications at the 2026 GSoC

And yes, if it looks like many failed applications came in during the last three days, that’s because 85% of them did (along with 46% of the successful ones during the same period).

A few honorable mentions:

  • Two near-identical proposals (we admire the teamwork, if nothing else);
  • Three applicants who forgot to change the name of the organization they were applying to;
  • A noticeable amount of AI-generated content that managed to be both verbose and unconvincing: an impressive combo.

Among our applicants, one country clearly stands out: India.


A world map showing the location of GSoC candidates

This does not in any way mean that the next round of GSoC participants will be Indian. In fact, none of last year’s three interns were from South Asia. In 2024, the region was represented by a student from Sri Lanka. But credit where it’s due: this country consistently produces a large pool of highly motivated developers willing to compete.

SankeyMATIC graph showing countries of origin, pass/fail status, and type of project submitted

At the beginning of the year, we came up with five project ideas we thought would be solid starting points, but we also said: you’re welcome to propose your own. About a third of the proposals ended up being ideas suggested by the candidates themselves, which was great to see. Again, that is no guarantee of selection in the next round, but it’s always nice to read “I understand what you’re building, and here’s how I’d make it better.”

So what’s next?

That was the easy filtering step. We clearly communicated our expectations, yet nearly two-thirds failed to meet them.

Over the next couple of weeks, the Kiwix team will go through the remaining candidates with a fairly simple lens:

  • “We know them, they’re good.”
  • “We don’t know them, PR was just performative.”
  • “We know them, and the experience was not great.”

Needless to say, we will only move forward with candidates who are known quantities (in a good sense)!

Mentors will then decide whether they actually want to mentor someone (it’s a volunteer job, after all, so no obligation there). After a couple of months dealing with dozens of PRs, many of them low quality, it’s understandable if enthusiasm has waned.

If (and only if) mentors are still willing, we will then look at project proposals:

  • Is the scope realistic?
  • Does it fit our priorities?
  • Do we trust the person to deliver?

Stay tuned.