Really, "For reliable software, you need Garbage Collection" is the straw man you're going to attack?
What comes to mind when you hear "reliable software"? Personally, I immediately think of critical embedded systems like rocket and spacecraft guidance and automotive control systems. And one thing I hear repeatedly is that many well-known coding standards for building such critical software prohibit all dynamic memory allocation, because dynamic allocation is a potential failure point.
And of course, if you don't have dynamic allocation, you don't need garbage collection, because it would have nothing to do. So if the software that needs reliability the most often doesn't use dynamic allocation, only the most ignorant could think that you need garbage collection for reliable software.
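To make the static-allocation style concrete, here's a minimal C++ sketch of the kind of structure those coding standards push you toward: a fixed-capacity queue whose storage is reserved at compile time, so it can never fail at runtime for lack of heap. The names are mine, not from any particular standard:

```cpp
#include <array>
#include <cstddef>

// A fixed-capacity queue: all storage is reserved at compile time,
// so enqueue can never fail due to heap exhaustion or fragmentation.
template <std::size_t Capacity>
class StaticQueue {
    std::array<int, Capacity> buf_{};
    std::size_t head_ = 0, tail_ = 0, size_ = 0;
public:
    bool push(int v) {
        if (size_ == Capacity) return false;  // reject instead of allocating
        buf_[tail_] = v;
        tail_ = (tail_ + 1) % Capacity;
        ++size_;
        return true;
    }
    bool pop(int& out) {
        if (size_ == 0) return false;
        out = buf_[head_];
        head_ = (head_ + 1) % Capacity;
        --size_;
        return true;
    }
    std::size_t size() const { return size_; }
};
```

The point is that every failure mode is explicit: a full queue returns false at the call site rather than throwing or exhausting the heap somewhere far away.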
I think you've taken a pretty narrow interpretation of "reliable software". It seems more reasonable to think that Bjarne was speaking at a general level, i.e., the idea that software written in non-garbage collected languages tends to be prone to memory management errors on the part of the programmer.
This has been a common meme for the past 15 to 20 years precisely because it is so easy to forget when a block of memory needs to be freed. However, reference-counted smart pointers, RAII, and the clearer ownership semantics now in the language go a long way toward mitigating the common manual memory management problems in C++.
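For readers who haven't seen it in practice, here's a tiny sketch of what RAII buys you. The `Buffer` type and its live counter are hypothetical, just to make the lifetime observable:

```cpp
#include <memory>

// RAII: the unique_ptr owns the allocation, and its destructor frees it
// on every exit path, so there is no "forgot to free" failure mode.
struct Buffer {
    static int live;            // hypothetical counter to observe lifetimes
    Buffer()  { ++live; }
    ~Buffer() { --live; }
};
int Buffer::live = 0;

void use_buffer() {
    auto b = std::make_unique<Buffer>();  // ownership is explicit and unique
    // ... work with *b; early returns and exceptions still free it ...
}   // destructor runs here automatically
```

Early returns, exceptions, and plain scope exit all go through the same destructor, which is why this idiom removes a whole class of leaks without a collector.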
The downside, of course, is that you have to know how to use these ideas to write "reliable software", and C++ does not make it easy. It's pretty much impossible to go from reading the standard to implementing correct and optimal C++ programs. There are so many gotchas, corner-cases, and features which require much study and experience to truly understand.
I'm not arguing that you need to avoid dynamic memory management to write "reliable software." I'm just pointing out that a lot of reliable software is written that way, and thus the idea that you need garbage collection to write reliable software is so obviously false that it's silly to think anyone believes it to be true, making it a terrible straw man to argue against.
> I'm just pointing out that a lot of reliable software is written that way, and thus the idea that you need garbage collection to write reliable software is so obviously false that it's silly to think anyone believes it to be true, making it a terrible straw man to argue against.
You're also arguing a straw man. Of course you can write reliable software without dynamic allocation. The question is: can you do it faster and/or cheaper using C++ or $ALTERNATIVE?
(You mentioned rocket and spacecraft guidance software as examples. That's an example of software that's exceedingly expensive to develop... and it doesn't actually do that much even though it's obviously complex.)
You say, "Of course you can write reliable software without dynamic allocation." Why is that "of course," if the myth being addressed is that you cannot write reliable software without garbage collection? If you're saying everyone knows that you can write reliable software without GC, and it'll just be expensive and such, then we're in agreement, because that's exactly what I'm saying.
And when I think of rocket and spacecraft systems (in the context of a discussion around programming) I recall fiascoes such as the Mars Climate Orbiter[0].
If we're going to be thorough about a discussion of reliability in software, we have to include cost in the equation. Rocket and spacecraft systems have extremely high cost per SLOC. It stands to reason that there might be solutions for producing reliable software (for some standard of reliability) far more rapidly and cheaply than the folks at NASA.
Mars Climate Orbiter is an odd choice. There are good examples of expensive software failures in spacecraft systems (the first Ariane 5 launch being a high-profile one), and they support your overall point, but MCO wasn't a software failure, but a human failure. Certainly the software could and should have been designed not to allow that failure, but ultimately the software behaved exactly as it was intended to, and did exactly what it was told to do.
As far as cost, you're completely right. My point isn't that we should all be building software this way (far from it, most software doesn't need to be that reliable) but rather that when reliability really matters, garbage collection isn't in the picture, so the "myth" being addressed is stupid.
> MCO wasn't a software failure, but a human failure
You're making a distinction where none exists. All software (that we know of, anyway) is the product of humans.
> but ultimately the software behaved exactly as it was intended to
I somehow doubt the designers intended for the software to cause the mission to fail.
> but rather that when reliability really matters, garbage collection isn't in the picture, so the "myth" being addressed is stupid
I would continue to disagree. I don't accept on faith that the standards of fallible agencies such as NASA are proven correct, especially given the counterexample I mentioned earlier. There are ways to implement hard real-time garbage collectors and prove them correct with far more rigour than was employed in the MCO mission. The side benefit of such proofs is that we mere mortal programmers can benefit from the work in our everyday lives, something you can't say about the extremely domain-specific code used by NASA.
So the Tacoma Narrows bridge collapse was indistinguishable from a software failure because all software is the product of humans? That makes no sense.
The MCO failure came about because people took the correct output of one program, then incorrectly used it as input for another program, which then performed exactly as it was supposed to on the bad input.
This is an interesting study in human-computer interaction and how to make that robust, but I don't see how you can possibly apply it to the question of how to make "reliable software."
Ahh, now you're affirming the consequent. I said all software is the product of humans. I did not say that all human products are software.
Where our disagreement seems to occur is where the boundaries of software systems lie. You appear to be making the claim that it is at the granularity of individual programs whereas I am claiming that the entire software system must be considered. If you were to write a bash script that pipes the output of curl (presumably an html file) to /dev/dsp0 and a horrible screeching noise emanates from your speakers, what you have produced is a software error. It does not matter that each of the individual components is working as intended; the system as a whole is not (unless you actually intended to produce that screech, of course).
I agree with your example of a pipe. But what if you manually retyped it and you were supposed to carry out a format conversion as you did so?
My understanding of the MCO failure was that it was a manual step in the process that failed. The humans were supposed to do something, and didn't. I don't see how that can be defined as software, or anything even close. It's analogous to seeing a highway sign that says the speed limit is 80km/h, setting my car cruise control to 80MPH, and then saying that it was a software defect that caused me to get a speeding ticket.
All software can be reliable, of course, even if it's not embedded software. Not leaking memory at every step and avoiding double frees and other memory-related errors tend to make software more reliable...
Why so upset if I may ask? I think Apple declared GCs "bad" anyway.
I'm upset because a leading figure of the programming community who a lot of people listen to is spouting nonsense, and lots of people are going to believe it.
Why do you think I'd care what Apple says about garbage collection...?
But it's not nonsense. Reliability isn't all or nothing, and GC completely removes a class of problems that happen often in languages like C or C++ when doing manual memory management. It makes those programs more reliable, even if they aren't as bug-free as your typical spacecraft software.
I'm not arguing against GC or the reliability thereof. I'm merely arguing that "GC is required for reliability" is a ridiculous straw man of a myth. I am in fact a fan of GC, but I also recognize that when lives are on the line, GC and indeed dynamic memory management of any type is usually out of the picture.
The reason this isn't a straw man is that it's a belief genuinely held by many people; most of us aren't exposed to the extremes of reliability that you're talking about. When lives aren't on the line, dynamic memory management is generally a necessity.
Who actually holds this belief, that garbage collection is a requirement for reliable software? Most people don't work with such software but surely everyone who even thinks about programming is aware that it exists. I mean, you don't have to dive deep into computing to know that cars are full of computers and software these days.
Keep reading HN. You'll see articles and comments that state (or imply) that normal humans can't handle memory management, and therefore GC is the way to avoid bugs and memory leaks.
Sorry, I can't give you references off the top of my head, but I've seen that stated, here, in the last couple of months at least.
I get this feeling too, and I wonder if it comes from a large, vocal group of people who have never developed anything in C or C++, yet appear to know all about how "dangerous" it is and enjoy mocking the languages, while pointing to niche languages with much smaller development tool ecosystems and support libraries.
Correction: Apple failed to implement a working GC, given the constraints of Objective-C semantics. It was a conservative GC, and it still broke when mixing libraries that weren't compiled the same way.
So they took the sensible option of having the compiler insert the retain/release calls that Objective-C developers would otherwise write manually.
This only covers framework code or objects that follow Cocoa semantics, everything else is manual.
Swift, being binary compatible with the Objective-C runtime, needs to make use of the same memory model.