The problem with GitHub stars

Most open-source projects today strive to show an idealized GitHub star history graph. It looks something like this:

[Chart: a cumulative GitHub star count climbing steeply up and to the right]

If you’re an engineer, you’re expected to read this as a signal of quality, vetted by a community of your peers. If you’re an investor, you’re expected to read it as a sign of traction.

There was a time when you could draw a meaningful signal from a graph like this. Unfortunately, that signal has long since decayed, especially if the open-source project is anything AI-related.

Signal decay

At the early stage of an open-source project, it can be challenging to signal quality to your users and potential investors. Reaching for the number of GitHub stars as a proxy makes sense. However, as our industry has optimized for GitHub stars, they have become an inflated popularity metric.

There are two reasons why I discount a project’s GitHub star count. The first is behavioral.

Every popularity contest devolves into a race to game the system. A particularly egregious example is projects that simply purchase stars: for $64, you can buy a thousand stargazers for your next GitHub project. Instead of trading dollars for stars, other projects trade stars for stars. These star-for-star exchanges, where maintainers promise to star your project if you star theirs, are reminiscent of follow-for-follow schemes on social media, the ultimate popularity contest.

The other reason why I pay little attention to GitHub stars is structural. Star counts are a cumulative metric. They’re a total sum over time that only ever goes up and to the right. In practice, no one ever goes back to unstar a project because it is declining in quality. Like every cumulative metric, GitHub stars don't say much about a project's performance over time.
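
If you want the time dimension back, the data is there: GitHub’s REST API exposes the timestamp of each star when you request the star+json media type. Here is a minimal sketch, assuming the requests library, an optional GITHUB_TOKEN environment variable for rate limits, and a stand-in repository, that buckets stars by month instead of reporting one cumulative total:

```python
import os
from collections import Counter
from datetime import datetime

import requests


def stars_per_month(owner: str, repo: str) -> Counter:
    """Bucket a repository's stars by the month they were given,
    instead of reporting one cumulative total."""
    # The star+json media type includes the starred_at timestamp.
    headers = {"Accept": "application/vnd.github.star+json"}
    token = os.environ.get("GITHUB_TOKEN")  # optional, raises the rate limit
    if token:
        headers["Authorization"] = f"Bearer {token}"

    monthly: Counter = Counter()
    page = 1
    while True:
        resp = requests.get(
            f"https://api.github.com/repos/{owner}/{repo}/stargazers",
            headers=headers,
            params={"per_page": 100, "page": page},
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json()
        if not batch:
            break
        for star in batch:
            starred_at = datetime.fromisoformat(star["starred_at"].replace("Z", "+00:00"))
            monthly[starred_at.strftime("%Y-%m")] += 1
        page += 1
    return monthly


if __name__ == "__main__":
    # "octocat/Hello-World" is just a stand-in; point this at your own project.
    for month, count in sorted(stars_per_month("octocat", "Hello-World").items()):
        print(month, count)
```

Plotted month by month, the launch-and-pump pattern described below is hard to miss.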

The pressure to show the above graph leads to a hype-driven launch-and-pump cycle, where a project accumulates a lot of stars in its first few months and very few after that.

If you’re managing an open-source project, consider shifting the narrative away from the number of GitHub stars.

How to tell a better story about your early-stage open-source project

Community health metrics tell a better story than the number of stargazers. How many non-founder contributors are there? What's the story behind how your most active contributors found your project? The number of contributors matters less than your understanding of their persona. A good grasp of who is in your community and why they’re there goes a long way toward scaling your project.
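
As a rough sketch of that first number, GitHub’s contributors endpoint is enough to count the people outside the founding team. The repository name and founder usernames below are hypothetical placeholders, and the sketch assumes the requests library (unauthenticated calls are fine for small projects):

```python
import requests


def non_founder_contributors(owner: str, repo: str, founders: set[str]) -> list[str]:
    """Return the logins of contributors who are not on the founding team."""
    contributors: list[str] = []
    page = 1
    while True:
        resp = requests.get(
            f"https://api.github.com/repos/{owner}/{repo}/contributors",
            params={"per_page": 100, "page": page},
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json()
        if not batch:
            break
        contributors.extend(c["login"] for c in batch)
        page += 1
    return [login for login in contributors if login not in founders]


if __name__ == "__main__":
    # The repository and founder usernames here are hypothetical placeholders.
    outside = non_founder_contributors("example-org", "example-repo", founders={"alice", "bob"})
    print(f"{len(outside)} non-founder contributors")
```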

Repository health metrics are another set that resonates with me. What is the contribution velocity from your contributors? How long does it take for a pull request or an issue to be closed? (A sketch of that calculation follows the quote below.) Beyond the sheer number of contributors, it’s necessary to highlight how easy it is for those contributors to move the project forward. A good open-source project’s design mimics that of a good programming language. I’ve written before on programming language design and have found several of those ideas to apply here. In particular, the idea of having a wide skill range:

Skill range is a concept borrowed from gaming. It has two essential elements: 1) the skill floor: how easy it is for new users to get started, and 2) the skill ceiling: how complex a task a user can accomplish with your product.
- What developer products can learn from language design

The best open-source projects have a low barrier to entry for the average developer and a high ceiling for an experienced developer to move the project forward.
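
On the time-to-close question above: GitHub’s issues endpoint exposes created_at and closed_at timestamps for issues and pull requests alike, which is all a median needs. A minimal sketch, again assuming the requests library and a placeholder repository, sampling only a few pages of recent history:

```python
from datetime import datetime
from statistics import median

import requests


def median_days_to_close(owner: str, repo: str, pages: int = 5) -> float:
    """Median days between opening and closing recent issues and pull requests.

    The issues endpoint returns pull requests as well, so this covers both.
    `pages` caps how much history is sampled (100 items per page).
    """
    durations = []
    for page in range(1, pages + 1):
        resp = requests.get(
            f"https://api.github.com/repos/{owner}/{repo}/issues",
            params={"state": "closed", "per_page": 100, "page": page},
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json()
        if not batch:
            break
        for issue in batch:
            opened = datetime.fromisoformat(issue["created_at"].replace("Z", "+00:00"))
            closed = datetime.fromisoformat(issue["closed_at"].replace("Z", "+00:00"))
            durations.append((closed - opened).total_seconds() / 86400)
    return median(durations) if durations else float("nan")


if __name__ == "__main__":
    # "example-org/example-repo" is a placeholder.
    print(f"Median days to close: {median_days_to_close('example-org', 'example-repo'):.1f}")
```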

Ultimately, you have to get people to adopt your open-source project. Usage metrics such as clones or downloads are essential to highlight when convincing new developers or investors that you've got a high-quality project worth paying attention to.
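
Clone counts are one of the few usage numbers GitHub reports directly, though only to maintainers and only for roughly the last two weeks. A minimal sketch, assuming the requests library, a GITHUB_TOKEN environment variable holding a token with push access, and a placeholder repository:

```python
import os

import requests


def recent_clones(owner: str, repo: str) -> tuple[int, int]:
    """Total and unique clone counts for roughly the last two weeks.

    The traffic API is maintainer-only: the token (read from GITHUB_TOKEN)
    must have push access to the repository.
    """
    resp = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/traffic/clones",
        headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
        timeout=30,
    )
    resp.raise_for_status()
    data = resp.json()
    return data["count"], data["uniques"]


if __name__ == "__main__":
    # "example-org/example-repo" is a placeholder for your own project.
    total, unique = recent_clones("example-org", "example-repo")
    print(f"{total} clones from {unique} unique cloners in the last two weeks")
```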

It’s refreshing to see a project emphasize any of the above, though I rarely see it. If you’ve seen good examples, or are one yourself, I’d love to hear from you.