Hacker News
Software Forge Performance Index (forgeperf.org)
21 points by mroche on April 10, 2023 | hide | past | favorite | 13 comments


These scores seem not to be well targeted for what's being measured. For example, they clearly penalise the number of requests made. All things being equal, more requests are certainly more wasteful, but there's always more context: QUIC/HTTP3 are very good at multiplexing requests, smaller requests for single entities can mean improved caching, and global CDNs reduce the impact of requests anyway.

Add to this the fact that almost all usage of apps like GitHub/Lab/etc are _warm-starts_ with filled caches, and these numbers start to feel quite divorced from the reality of the user experience.

It doesn't help that the repository being used is essentially a toy repository with very little code or history, so it's hard to get a sense of how these systems scale. My experience on a ~1M-line, ~150k-commit repo on GitHub was absolutely fine, for example.

This is all just Lighthouse under the hood, which is great for SEO – it's all about first-landing, cold-start performance, and ideal if you're running landing pages. But much of Lighthouse isn't designed for web apps with long-lived sessions, where the majority of usage takes place with warm caches or uses local navigation with API requests. This feels like both a case of results being taken out of context and of using the wrong tool for the job.


> It doesn't help that the repository being used is essentially a toy repository with very little code or history, so it's hard to get a sense of how these systems scale.

That is the "best case" benchmarks. The "worst case" benchmarks are of the Linux kernel repository.

"It's usually cached" is not an excuse for excessive traffic, especially as a lot of it is dynamic content in the first place.


> "It's usually cached" is not an excuse for excessive traffic,

It does redefine what "excessive" means, though. If 10 independent requests make the code easier to maintain, and the performance hit is a few ms on average, I wouldn't consider that excessive. If, under the same circumstances, the performance hit were a few hundred ms, it would indeed be excessive.
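To make that concrete, here's a back-of-envelope sketch with entirely made-up numbers: under HTTP/2/3 multiplexing, 10 small requests sharing one connection cost roughly a single round trip of wall-clock time, not 10.

```python
# Back-of-envelope: wall-clock cost of 10 small requests, multiplexed
# vs. strictly serialized. All numbers here are hypothetical.

RTT_MS = 50      # assumed round-trip time to the server
REQUESTS = 10

# Naive worst case: each request waits for the previous one to finish.
sequential_ms = REQUESTS * RTT_MS

# HTTP/2/3 multiplexing: all 10 requests go out together on one
# connection, so the cost is roughly a single round trip.
multiplexed_ms = RTT_MS

print(sequential_ms)   # 500
print(multiplexed_ms)  # 50
```

With a 50 ms RTT, the multiplexed case lands in the "few tens of ms" range the comment describes, while only a fully serialized pipeline would reach hundreds of ms.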

It's not making excuses so much as pointing out a potential discordance between what is measured and what is assumed based on that measurement.


Further, 10 requests where 8 can be aggressively cached may well be better than 4 requests where none can be cached.

Separating out requests isn't just about maintainability; it's also about separating entities with different caching behaviour. We could just make one big request for HTML, but if much of the page doesn't need to change, that's wasteful.
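A quick arithmetic sketch of that trade-off, with made-up sizes: splitting the stable parts of a page into their own cacheable responses means they're transferred once, while a bundled response re-sends everything on every visit.

```python
# Sketch: total transfer over repeat visits when cacheable parts are
# split into separate responses vs. bundled into one monolithic
# response. Sizes are hypothetical.

VISITS = 20
DYNAMIC_KB = 10   # part that changes every visit
STATIC_KB = 190   # part that rarely changes (CSS, JS, page chrome)

# One big response: everything is re-sent on every visit.
bundled_kb = VISITS * (DYNAMIC_KB + STATIC_KB)

# Split responses: static parts fetched once, then served from cache.
split_kb = STATIC_KB + VISITS * DYNAMIC_KB

print(bundled_kb)  # 4000
print(split_kb)    # 390
```

Even though the split version makes more requests per cold load, the warm-cache traffic is an order of magnitude lower in this toy example.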


It should be pointed out that the site is run by Drew DeVault of SourceHut, who may be sitting next to his datacenter while doing these measurements. But it's nice to see these comparisons.


Fair supposition, but Lighthouse controls for latency and throughput, and the same tests have been run from various parts of the world by various parties with indistinguishable results. Feel free to run them on your own network; it's pretty straightforward:

https://git.sr.ht/~sircmpwn/forgeperf

Takes about an hour though.


Thank you for that clarification.


The only time I look at the actual web interface for bitbucket or github is during PR reviews. Otherwise I'm either using the respective desktop app or grabbing blames from my editor. So, I'm always locally browsing branches.

So realistically, only Commit (under Browsing Git repositories) and Code review actually matter. Lighthouse metrics don't matter much at that point (if I'm at the PR stage I will put up with an unusual delay to get the PR through).

I think that's what you're seeing with these metrics. Devs don't care about a repo's browser experience; they care about the terminal/desktop-app experience.


No offense to the Atlassian guys, but how do they justify their horrid overall performance with BitBucket?


They seem to have pivoted BitBucket a few years ago. Rather than being a GitHub competitor, they moved to being the default option for companies that buy into the Atlassian ecosystem. JIRA was always their big product, and they've expanded into a bunch of other JIRA-adjacent products (sometimes via acquisitions) such as Trello and OpsGenie. For a company their size it makes sense to have BitBucket as a box they can tick in sales pitches, but I think they realised a long time ago that they weren't going to win anyone over based purely on that.


I think every time I go to use Bitbucket I get an error 500 at least once.


The same way they would justify it for Jira


Just click more buttons.




