Huh, that has not been my experience at all.

To be fair, it's been something like 10 years, IIRC.

The database in question was MySQL 8, running on plain old enterprise SSDs (RAID 10)

The workload was processing transactions (financial payments)

The database schema was ... let's call it questionable, with pretty much no normalization because "it's easier when we look at it for debugging" - hence extremely wide rows with countless updates to the same row throughout processing, roughly 250-500 writes per row per request/transaction from what I recall. And the application was an unholy combination of a PHP+Java monolith, linked via RPC and transparent class sharing.

DB IO was _never_ the problem, no matter how high qps got. I can't quote an exact number, but it was definitely a lot higher than what this article claims (something like 40-50k qps on average "load" days, like pre-Christmas, etc.)

Not sure how they're getting this down to ~250 qps; it sounds completely implausible.

Heck, I can do non-stop single-row updates at >1k qps on my desktop on a single NVMe drive - and that's not even using RAID.
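
Roughly the kind of test I mean - a minimal sketch, assuming a throwaway one-row table and mysql-connector-python (all names made up), with autocommit on so every UPDATE is its own durable transaction:

    import time
    import mysql.connector  # pip install mysql-connector-python

    # Assumes: CREATE TABLE counters (id INT PRIMARY KEY, n BIGINT);
    #          INSERT INTO counters VALUES (1, 0);
    conn = mysql.connector.connect(
        host="127.0.0.1", user="bench", password="bench", database="bench"
    )
    conn.autocommit = True  # one transaction (and one log flush) per UPDATE
    cur = conn.cursor()

    N = 10_000
    start = time.monotonic()
    for _ in range(N):
        cur.execute("UPDATE counters SET n = n + 1 WHERE id = 1")
    elapsed = time.monotonic() - start
    print(f"{N / elapsed:.0f} single-row updates/s")

    cur.close()
    conn.close()

On a single connection this is basically bound by the drive's flush latency (with innodb_flush_log_at_trx_commit=1), so the exact number depends heavily on how the NVMe handles syncs.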



Using your numbers and speaking roughly, if you're doing 50k rps with typically 500 writes to the same row, wasn't your contention around 1%? TigerBeetle claims to be able to handle workloads with very high contention - for example, 80 or 90% of transactions paying a fee to a single account, to make up a use case.
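
Spelling that arithmetic out (reading "contention" loosely as the share of total writes that land on one hot row - the article may define it differently):

    # Back-of-the-envelope from the numbers upthread; the definitions here are
    # assumed, not taken from the article.
    total_writes_per_s = 50_000        # ~40-50k qps on busy days
    writes_per_hot_row = 500           # ~250-500 updates to one row per request
    requests_per_s = total_writes_per_s / writes_per_hot_row   # ~100 requests/s
    hot_row_share = writes_per_hot_row / total_writes_per_s    # 0.01, i.e. ~1%
    print(requests_per_s, hot_row_share)

Which is presumably where the ~100-200 requests/s and the ~1% figures come from.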


Ah, you might be right!

Contention is what can be parallelized, right?

So with roughly 100-200 requests/s you end up with a contention of 1 to 0.5, if I understood that right.

That moves me even further towards agarland's points, though - if I plug that into the equation, I end up with >50k qps.

The numbers used create an insanely distorted idea of real-world performance.



