
I'm speaking from the perspective of someone who works with QA in my day job, and I do have to answer questions about quality. Like, "did the quality of the product increase in the last release?" or "is our quality higher than the competition's?" or "will this drop in quality be acceptable to the majority of our customers?"

And, really, every time I'm called on to answer questions like these, I know full well that no matter how much time I spend analyzing test results, coverage, test strategies, dissecting JIRA, etc., my answers will be based on little more than a guess (and no, it's not my subconscious at work; it just means I'll probably be wrong!)

I wish I could just "let it go", observe the gestalt of the product, and say lgtm! (or not), just because my subconscious told me so. :)

No, it's not like Jenga. The pieces don't reinforce each other. It's always possible to drill down into the details, which makes discussion and comparison easy (or easier), but the more complex the thing whose quality I'm trying to assess, the worse it gets.

Is ZFS better than Ext4?

Is MariaDB good enough, or should we switch to a more "high quality" PostgreSQL? How about Oracle?

Is Python 3.13 objectively better than Python 3.10?

What about Ethernet vs InfiniBand?

Answering any of these questions would tie experts up in knots of endless argument, precisely because quality is so hard to assess. It has too many faces, too many metrics...
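
A minimal sketch in Python (the products, metrics, scores, and weights are all made up for illustration) of why these comparisons go in circles: the same measurements produce opposite "quality" rankings depending on which facets you choose to weight.

    # Hypothetical 0-10 scores for two products on three facets of quality.
    metrics = {
        "throughput":  {"A": 9, "B": 6},
        "correctness": {"A": 6, "B": 9},
        "operability": {"A": 5, "B": 8},
    }

    def quality(product, weights):
        # The aggregate score is only as objective as the weights -- and
        # the weights are a judgment call, not a measurement.
        return sum(weights[m] * metrics[m][product] for m in metrics)

    perf_first = {"throughput": 0.6, "correctness": 0.2, "operability": 0.2}
    ops_first  = {"throughput": 0.2, "correctness": 0.3, "operability": 0.5}

    for name, w in (("perf-first", perf_first), ("ops-first", ops_first)):
        a, b = quality("A", w), quality("B", w)
        print(f"{name}: A={a:.1f}, B={b:.1f} -> winner: {'A' if a > b else 'B'}")

    # perf-first: A=7.6, B=7.0 -> A wins; ops-first: A=6.1, B=7.9 -> B wins.
    # Same products, same data, opposite conclusions.

Every "is X better than Y?" debate smuggles in a weight vector like this, usually an implicit one, which is why the experts never converge.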


