Hacker News | wilun's comments

Are you even trying?

A random search tells me that "The mean DD for the studied sample of projects is 7.47 post release defects per thousand lines of code (KLoC), the median is 4.3 with a standard deviation of 7.99." ( https://ieeexplore.ieee.org/document/6462687/ )

So clearly if you are careful and use state of the art practices, this is very doable.

Not only is this doable, but various individuals and teams throughout history have reached far lower defect densities. Hey, for all practical purposes, TeX is bug-free, for example.

If you are not able to write 100 lines of useful code without a bug in it (not infallibly, but at least sufficiently often), maybe you should simply study and practice to get that ability.
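
To put the quoted defect density in perspective, here is a back-of-the-envelope sketch (assuming, purely for illustration, that defects arrive as a Poisson process at the paper's mean rate):

```python
import math

def p_bug_free(loc, defects_per_kloc):
    """Probability that `loc` lines contain zero defects, assuming
    defects arrive as a Poisson process at the given mean rate."""
    expected_defects = defects_per_kloc * loc / 1000.0
    return math.exp(-expected_defects)

# At the quoted mean of 7.47 defects/KLoC, a 100-line chunk comes out
# bug-free almost half the time, even for an *average* project:
print(round(p_bug_free(100, 7.47), 2))  # -> 0.47
```

So a careful team operating well below the mean rate would clear that bar routinely; the claim is about moving the rate down, not about infallibility.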


Those measurements inherently make no sense, as you can't know unknown unknowns. Sure, for all intents and purposes, if you never encounter a particular defect in a billion years of usage, then the bug may as well not exist, but that doesn't mean it doesn't exist.


Those measurements inherently make more sense than hand-waving; and although mathematically I agree with you, the world is not mathematically pure.

Regardless, I maintain that implying it would be exceptional to write 100 bug-free lines of useful code is ridiculous. I'm not stating that it is easy, nor that most 100-line chunks are written that way. Just that it is not only possible but accessible. Depending on the field it might be more or less difficult, but in general I suspect there are tons of 100-line chunks that have been developed correctly on the first try, and those metrics tend, informally I concede (but if you dig enough, what is even formal enough?), to weigh more in favor of my viewpoint than in favor of the difficulty level being astonishingly high.


If the only choice is between a rewrite and not gradually fixing the mess, then I'll take the rewrite anytime and let the cycle continue.

The problem is: is your rewrite really going to be a full rewrite, or some kind of hybrid monster (at the architectural level, of course; there is no problem in reusing little independent pieces, if any exist)? Because you can easily fall into all the traps of both sides, if the technical side is not mastered well enough by the project management...


That's fair, but one has to remember that some of the key points are to be balanced, and that, like you said, "the quality bar needs to be high".

And I'm more for prioritizing not introducing bugs over fixing all the old ones. Which is challenging on, what should we call it?, "legacy" software. So that priority can and must be temporarily reversed when the "legacy" weighs too much.

So it's all very context dependent, and not having anybody (or too few people) working on making things better when it's needed is not going to deliver any kind of velocity in the long term (and the short-term velocity is probably already way too low in those cases). Too bad for the mythical time to market...

So you have to be able to say no to bugfixes, but you certainly also have to be able to say no to the eternal rush of new half-baked features, when needed. A perpetual short-term obsession with the "opportunity cost of not working on a feature" can yield quite paradoxical results if you are trying to build features on some kind of zombie legacy code (that is only ever edited with disgust and great difficulty, but never seriously refactored).

Not only is this balance hard to achieve, but your role as a senior tech lead or project manager is precisely to consider the cleanup needs carefully, and to advocate for them when needed, including by pushing back against feature-creep pressure. Because if you don't, most of the time nobody else will. As a tech lead, this means, among other things, that a black-box approach to parts of the maintained software is out of the question (of course you can delegate, but even then it's imperative to stay in the equation for that purpose, only with less detail). Paradoxically, even if the quality is crap and the organization notices and tracks loads of bugs, most people will be happy the moment the bugs are triaged, assigned, and eventually "fixed" with more horrible garbage (that is, with the impression that something is being done), rather than doing the right thing, which is to organize a deeper cleanup of the software.

I've got the impression that it is rare to find projects where this balance is achieved correctly, but maybe that's just bad luck. Well, in lots of cases, the famous ones (I'm thinking at the level of Linux, Firefox, Python, etc., not just your random niche software) are actually not that bad, and their less balanced competitors have way shorter lifespans...


This is my experience as well. Product progress is prioritized over code quality, and the ultimate cost to velocity is real but often unacknowledged. Bugs go unfixed because they are very hard to fix, and this high cost is taken for granted.

It’s definitely on the technical leadership to stand up for code cleanliness and push back against product, and on the higher-ups to recognize the importance of this dynamic.


Oh, the good old IE argument: "but it is technically integrated into the OS and provides all kinds of essential services, so we are not abusing our monopoly".

While it is not (it can be removed/replaced; the limitations preventing that are completely artificial, and this probably played a good part in what has been judged), and even if it were, things should have been bundled differently to begin with (and if they can't be, that can be considered a conscious decision potentially motivated by a desire to abuse a monopoly, so in all cases it should be redesigned).

So it's mostly the same causes and the same effects, from a high-level view -- and I'm not surprised. Maybe the way to become compliant (after their pointless whining phase has passed) will even be similar? I'm not buying the business-model argument. Google's browser, the Play Store, and so on are now extremely well established and won't be abandoned in any kind of mass exodus any time soon. In ten years, they can be challenged, but that's the fucking POINT: practical competition should be allowed.

It's astonishing that everybody and their dog was scandalized by MS's behavior at the time (and some still are today, despite present MS being quite different from the old one), while Google has somehow managed to be considered friendly despite doing exactly the same shit, if not worse, while simultaneously even pretending not to be evil. Well, maybe evil is a strong word, and I can concede that they never pretended not to be hypocrites :p


Playing devil's advocate here: I feel like Play Services is a necessary evil. It is the only thing keeping the ecosystem from fragmenting further. Look at the OEMs' update cycles. If not for Play Services, which is updated independently from the Android OS itself, app compatibility would be a nightmare. There is nothing to replace it with. Nokia tried and failed.

The other thing is, if every OEM starts writing its own API for these services, app developers will have to write apps for each OEM, because those APIs will surely not be compatible with each other. We would go back to the days of Symbian, where apps came with a huge list of phones they were known to work on.


The meaning of "defensive programming" is highly contextual even within a given "field". Also, major influential parts of the industry (the latest example being the C++ committee) are moving away from indiscriminate checks and exceptions, and toward contract programming.
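
To illustrate the distinction (a hypothetical sketch, not from any particular codebase): defensive code silently "handles" every possible misuse at every layer, while contract-style code states its precondition once and treats a violation as a bug in the caller.

```python
def mean_defensive(xs):
    # Defensive style: silently "handle" misuse, masking the caller's bug.
    if xs is None or len(xs) == 0:
        return 0.0  # a made-up value the caller may never notice
    return sum(xs) / len(xs)

def mean_contract(xs):
    # Contract style: the precondition is part of the interface;
    # violating it fails loudly, pointing at the call site that is at fault.
    assert xs, "precondition violated: xs must be a non-empty sequence"
    return sum(xs) / len(xs)
```

The contract version is shorter, and the failure it produces names the actual bug instead of letting a fabricated 0.0 propagate.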


> Only a debugger can show you the steps and conditions involved that lead to that invalid value.

No. Static analysis can, and it is actually what is used most of the time, either with an automated tool or with your brain.

Of course debugging is also needed; the right mix between the two is what matters, and you are actually doing static analysis even if you use a debugger a lot (at least I hope so, otherwise you are probably failing to fix your bugs correctly way too often).
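
As a hypothetical example of "static analysis with a tool or with your brain": tracing where a None can come from is often a matter of reading the types, not stepping through a run. A checker such as mypy flags the unguarded use below without ever executing the code (the function names and config shape are made up for illustration).

```python
from typing import Optional

def find_port(config: dict[str, int], service: str) -> Optional[int]:
    # The possibility of None is visible right here, in the return type.
    return config.get(service)

def connect(config: dict[str, int], service: str) -> str:
    port = find_port(config, service)
    # Using `port` unguarded (e.g. `port + 0`) is what a type checker
    # rejects statically. The guard below is the fix you find by
    # reading the types, not by walking up a stack:
    if port is None:
        raise KeyError(f"no port configured for {service}")
    return f"localhost:{port}"
```
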


But isn't breaking into a debugger and looking at the call stack often faster? Your brain won't help you much if you have thousands of files and more than 100,000 LOC, whereas with the debugger you just walk up the stack until you see where that NULL came from. And after fixing the bug, of course you can use your brain and search for similar code patterns to make sure the same mistake isn't lurking anywhere else.


While we will probably always have to debug, and if not the "code" then a specification formal enough that it can be considered code anyway, there are different ways to approach debugging's role within the software development lifecycle. At one extreme, one can "quickly" but somewhat randomly throw out lines without thinking much, then try the result and correct the few defects those few tries reveal (leaving dozens to hundreds of other defects to be discovered at more inconvenient times). At the other, one can think hard about the problem, study the software where the change must be made, and carefully write code that implements exactly what is needed, with very few defects on both the "what" and the "how" side.

Note that the "quick" aspect of the first approach is a complete myth (if taken to the extreme, and except for trivially short runs or when the result does not matter much), because a system cannot be developed like that in the long term without collapsing on itself. There will be either spectacular failures or unplanned dev slowdowns, and if the slowdown route is taken, the result will be poorer than if a careful approach had been taken in the first place, while the velocity might not even be higher.

Of course, every degree exists between the two extremes, and going too far to one side for a given application will cause more problems than it solves (e.g. missing time-to-market opportunities).

Anyway, some projects, maybe those related to computer infrastructure (or actually any kind of infrastructure), sit more naturally on the careful track (and even then it depends on the aspect; cyber security, for example, is still largely an afterthought in large parts of the industry). On the careful track, debugging is only needed as a non-trivial activity when everything else has failed, so hopefully in very small quantities. That is not to say that good tooling is unnecessary when debugging really is the last resort. It is just that it is confined to unreleased side projects and tooling, or, when it happens in prod, it marks so serious a failure that, compared to other projects, it hopefully does not happen often. In those contexts, a project that needs too much debugging risks dying.

So the mean "value" of debugging might be somewhat smaller than the mean "value" of designing and writing code, and of otherwise organizing things so that we do not have to debug (that often).


POSIX TTYs, and more precisely stdin/stdout/stderr inheritance and the internals of file descriptors, have a completely insane design. There is the famous divide between file descriptors and open file descriptions. Hilarity can and will ensue in tons of domains. I nearly shipped code with bugs because of that mess (and could only avoid them by using threads; you can NOT switch your std fds to non-blocking without absolutely unpredictable consequences), and obviously some bugs of this class can create security issues. Especially, and in a way obviously, when objects are shared across security boundaries.

Long gone is the time when Unix people were making fun of the lack of security in consumer Windows. Today, there is no comprehensive model on the most-used "Unix" side, while modern Windows certainly has problems in its default configuration, but at least its security model exists, with well-defined boundaries (and even if we can be sad that some seemingly security-related features are not officially considered security boundaries, at least we are not deluding ourselves into thinking that a spaghetti of objects without security descriptors can be shared and the result can be a secure system...)
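
The descriptor-versus-description divide can be shown in a few lines (a sketch using Python's `fcntl` on any POSIX system): status flags such as `O_NONBLOCK` live on the shared open file description, so setting them through one descriptor silently changes behavior through every other descriptor pointing at the same description, including descriptors in other processes that inherited your stdout.

```python
import fcntl
import os

r, w = os.pipe()
r_dup = os.dup(r)  # two descriptors, ONE underlying open file description

# Set O_NONBLOCK through the duplicate only...
flags = fcntl.fcntl(r_dup, fcntl.F_GETFL)
fcntl.fcntl(r_dup, fcntl.F_SETFL, flags | os.O_NONBLOCK)

# ...and the original descriptor is now non-blocking too, because the
# flag was stored on the description they share:
print(bool(fcntl.fcntl(r, fcntl.F_GETFL) & os.O_NONBLOCK))  # -> True
```

Replace `r_dup` with an inherited stdout and the "unpredictable consequences" above follow: your parent shell's stdout just went non-blocking behind its back.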


There is a model, it's just not particularly well publicised: a file descriptor is a capability.

That's it.
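
A minimal illustration of that view (hypothetical file name, standard POSIX semantics): permissions are checked once, at `open`; afterwards the descriptor itself is the token of authority, and it keeps working even if the permission bits that justified it are revoked.

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "secret.txt")
with open(path, "w") as f:
    f.write("hello")

fd = os.open(path, os.O_RDONLY)  # the permission check happens here, once
os.chmod(path, 0o000)            # revoke every permission bit on the file

# The already-held descriptor still grants read access: it is the capability.
data = os.pread(fd, 5, 0)
print(data)  # -> b'hello'
os.close(fd)
```
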


Is it efficient and sufficient though? And can and do we build real security on top of it?

This issue shows that systems have been built for decades with blatant holes, because the model was not taken into account even in core OS admin tools.

There is also the problem of the myth that everything is an fd. That has never been true, and it becomes less and less true as time passes.

Also, extensive extra security hooks and software using them have been built, but not on top of this model.

Finally, sharing POSIX fds across security boundaries often causes problems because of all the features available to both sides, whose security impact is not studied.

A model merely stating that POSIX fds are capabilities is wildly insufficient. So if this is the only one, then even in the context of pure POSIX we already know it is an extremely poor one.


I'm not 100% sure if you tried to be ironic or if you really reported that the video was better in 8k than FHD.

Because actually, it can be.

Although 8k is overkill: 4k will be enough, and 1440p nearly OK, even on your old 1024x768 monitor. Typically, video encoding subsamples some of the color components. If you play 4k content on an FHD screen, the quality can be better than a mere FHD encoding, because (in most cases) the chroma will effectively no longer be subsampled at your FHD screen's resolution.


It was a stab at irony.

True, but the video is already subsampled. That's how it was able to be uploaded at 1080p at all, since the source video is 8k. So 8k vs 1080p shouldn't make any difference on monitors less than M-by-1080 resolution.


The video is typically subsampled when encoded at capture resolution, but it is also subsampled at every other encoding resolution, because subsampling is part of the encoding itself, and the encoding does not vary depending on whether the source was downscaled or not.

So video codecs work most of the time with subsampled chroma components. Your encoded 1080p stream might only be able to render, after decoding, e.g. 540 lines of those components, while with the 4k stream it might be 2160/2 => back to 1080 lines.

Edit: but to be clear, I'm not advocating for people to choose 2x stream and start watching 4k on FHD screens in general, that would be insane. Chroma subsampling is used because the eye is less sensitive to those colors.
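
The arithmetic behind that claim, as a sketch (assuming the common 4:2:0 subsampling scheme, where the chroma planes are halved in both dimensions):

```python
def chroma_plane(width, height):
    """Chroma plane size under 4:2:0 subsampling: halved both ways."""
    return width // 2, height // 2

# A 1080p stream carries only a 960x540 chroma plane...
print(chroma_plane(1920, 1080))  # -> (960, 540)
# ...while a 4k stream's chroma plane is already full FHD resolution,
# so after downscaling to FHD the chroma is effectively unsubsampled:
print(chroma_plane(3840, 2160))  # -> (1920, 1080)
```
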


I would be interested in a double blind experiment confirming that their specific implementation of chroma subsampling is even detectable. The eye is much less sensitive to colors than intensity, as you point out. If it were perceptible, I think the codec designers wouldn't feel it was an acceptable tradeoff.

> So your encoded 1080p might be able to render after decoding only e.g. 540 lines of those components, while with the 4k stream it might be: 2160/2 => back to 1080.

I'm not sure that's accurate -- whatever downscaling process was used to convert from 8k to 1080p on Google's servers is probably the same process to convert from 8k to 1080p in the youtube player, isn't it? At least perceptually.

I would agree that if they convert from 8k (compressed) to 4k (compressed), then 4k to 1080p (compressed), then that would introduce perceptible differences. But in general reencoding video multiple times is fail, so that would be a bug in the encoding process server side. They should be going from the source material directly to 1080p, which would give the encoder a chance to employ precisely the situation you mention.

Either way, you should totes email me or shoot me a keybase message. It's not every day that I find someone to debate human perceptual differences caused by esoteric encoding minutiae.


It's not just that the eye is less sensitive to chroma.

Although your 4:2:0 subsampled 1080p video only has 960x540 pixels with chroma information, the decoder should be doing chroma upsampling, and unless it's a super simple algorithm, it should be doing basic edge detection and fixing the blurry edges chroma subsampling is known to cause. I posit that even with training, without getting very, very close to your screen, you wouldn't be able to tell whether the source material was subsampled 4:2:0, 4:2:2, or 4:4:4.

The truth is that generally people DO subjectively prefer high resolution source material that has been downscaled. Downscaling can clean up aliasing and soften over-sharp edges.

People who watch anime sometimes upscale video beyond their screen size with a neural-network-based algorithm, and then downscale it to their screen size, to achieve subjectively better image quality. This even though almost all 1080p anime is produced at 720p and then upscaled in post-production!


It will make a difference if the encoding compression is different. Not all 1080p streams are equal. A 1080p FHD Blu-ray is around 30 Mbps. I've read that 20 Mbps H.264 is almost indistinguishable from a 30 Mbps Blu-ray. In my own personal test using some Star Wars Blu-rays, 10 Mbps looks pretty good compared to the Blu-ray. On YouTube I've seen anywhere from 2-4 Mbps used for 1080p and 7+ Mbps for 4k.

A 4k or 8k stream coming into your computer at 10+ Mbps and being downscaled to 1080p can very well contain more information than a lower-quality 1080p stream coming in at 4 Mbps, even after the downscaling.
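
A rough way to see it (using the hypothetical figures from the comment above, assuming 30 fps and ignoring codec differences): compare the bits available per displayed FHD pixel per frame, since both streams end up on the same 1920x1080 screen.

```python
def bits_per_displayed_pixel(bitrate_mbps, fps=30, width=1920, height=1080):
    """Bits available per displayed FHD pixel per frame, once the stream
    is shown (possibly downscaled) on a 1920x1080 screen."""
    return bitrate_mbps * 1_000_000 / (fps * width * height)

# A 4 Mbps native 1080p stream vs a 10 Mbps 4k stream watched at 1080p:
print(round(bits_per_displayed_pixel(4), 3))   # -> 0.064
print(round(bits_per_displayed_pixel(10), 3))  # -> 0.161
```

By this crude measure, the downscaled 4k stream has roughly 2.5x the bit budget per pixel actually shown, before even counting the chroma-resolution advantage.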


> Tesla drivers have familiarized themselves with the limitations of the system

and then one day you get an update that breaks the mental model you had become familiar with, your Tesla crashes, and you die.


