Hacker News | new | past | comments | ask | show | jobs | submit | login

Honestly, I've made over 15 games in my career, and safety with C++ really just isn't an issue with decent developers. The line between games programmer and designer is narrowing; most designers are competent programmers. Furthermore, there are some great tools out there to help prevent things like memory leaks. Combine that with good company practice, like code reviews, and it becomes a non-issue.


I guess, then, that you weren't part of the Battlefield 4 team [1]. I've discussed the "no decent programmer" fallacy in the past; yes, in theory, if programmers were careful and alert they could create flawless software, yet this never happens in practice because humans are prone to errors (e.g. not understanding a subtlety of the language or library, assuming that a validation is done at a different level of abstraction, failing to imagine an error scenario and how it could occur). Languages like Rust offer the same capabilities as C or C++ while eliminating entire classes of bugs.

[1] http://en.wikipedia.org/wiki/Battlefield_4#Technical_issues_...


If BF4 was written in C# or Java (or Rust or Go?), I'm sure it would still have just as many bugs. One of people's biggest complaints is the kill shots that you don't see, but that's a design choice (client-side hit detection).


Of course, it's impossible to say for certain without seeing the codebase, but considering that Rust makes several classes of C++ bugs impossible at compile time, I'd be hard-pressed to imagine that a Rust version wouldn't be less buggy.


If the safer type system gives the devs an unwarranted sense of security, they might write fewer tests, be less careful in their design, or wait longer between audits and other sanity checks.

If on the other hand the devs understand which classes of bugs aren't ruled out in Rust, then sure, you will end up with fewer bugs.


Rust's type system eliminates the need to test for whole classes of bugs, because they are statically checked at compile time. This means that tests can focus on logic errors rather than routine bookkeeping. If you look at the example set by the Rust repository itself (https://github.com/mozilla/rust/), it is heavily tested, and every single PR (https://github.com/mozilla/rust/pulls) is reviewed before merging. This discipline definitely filters down into third-party libraries.


Less prone to certain kinds of bugs, sure. But logical errors, not necessarily.


> Furthermore, there's some great tools out there to help prevent things like memory leaks. Combine that with good company practice, like code reviews, and it becomes a non-issue.

The security track record of applications written in C++ disagrees with you.


We are talking about new engines written in these languages, though, not multi-decade old codebases still using inline assembler, goto, and pointer arithmetic.

Modern C++ is really safe if you use the subset that involves automatic storage duration, well-bounded arrays, etc., and you use all the warning flags of your compiler, run static analysis, have a robust test framework, and so on.


No, modern C++ is not even close to memory safe. This is my favorite meme to destroy over and over on HN. :)

Consider iterator invalidation, null pointer dereference (which is undefined behavior, not a segfault -- and you can't get away from pointers because of "this" and move semantics), dangling references, destruction of the unique owner of the "this" pointer, use after move, etc. etc.


Extraordinary claim; please elaborate. I'm working on hundreds of thousands of lines of C++ code with a medium-sized team; memory issues are almost non-existent because of the disciplines described above.


I've described this many times in the past, but here are a few things that modern C++ does nothing to protect against:

* Iterator invalidation: if you destroy the contents of a container that you're iterating over, undefined behavior. This has resulted in actual security bugs in Firefox.

    std::vector<MyObject> v;
    v.push_back(MyObject());
    for (auto& x : v) {
        v.clear();     // destroys the elements and invalidates the loop's iterators
        x.whatever();  // UB: x is now a dangling reference
    }
* "this" pointer invalidation: if you call a method on an object to which a unique_ptr or shared_ptr holds the only reference, there are ways for the object to cause the smart pointer holding onto it to let go of it, causing the "this" pointer to go dangling. The simplest way is to have the object be stored in a global variable and to have the method overwrite the contents of that global. std::enable_shared_from_this can fix it, but only if you use it everywhere and use shared_ptr for all the objects that you plan to call methods on. (Nobody does this in practice because the overhead, both syntactic and at runtime, is far too high, and it doesn't help for the STL classes, which don't do this.)

    class Foo;

    std::unique_ptr<Foo> inst;

    class Foo {
    public:
        virtual void f();
        void kaboom() {
            inst = nullptr; // destroys *this if inst owns it
            f();            // UB if this was the object inst owned
        }
    };
* Dangling references: similar to the above, but with arbitrary references. (To see this, refactor the code above into a static method with an explicit reference parameter: observe that the problem remains.) No references in C++ are actually safe.

* Use after move: obvious. Undefined behavior.

* Null pointer dereference: contrary to popular belief, null pointer dereference is undefined behavior, not a segfault. This means that the compiler is free to, for example, make you fall off the end of the function if you dereference a null pointer. In practice compilers don't do this, because people dereference null pointers all the time, but they do assume that pointers that have been successfully dereferenced once cannot be null and remove those null checks. The latter optimization has caused at least one vulnerability in the Linux kernel.
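For contrast, a minimal Rust sketch (the `first_char` helper is hypothetical, just for illustration) of how Rust sidesteps this class entirely: there are no null references, absence is an `Option<T>`, and the compiler forces the "null check" before you can touch the value.

```rust
// Absence is modeled with Option<char>, not a null pointer, so the
// compiler rejects any attempt to use the value without checking.
fn first_char(s: &str) -> Option<char> {
    s.chars().next() // None for the empty string
}

fn main() {
    // Pattern matching makes forgetting the check a compile error.
    match first_char("") {
        Some(c) => println!("first char: {}", c),
        None => println!("empty string, nothing to dereference"),
    }
}
```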

Why does use after free matter? See the page here: https://www.owasp.org/index.php/Using_freed_memory

In particular, note this: "If the newly allocated data chances to hold a class, in C++ for example, various function pointers may be scattered within the heap data. If one of these function pointers is overwritten with an address to valid shellcode, execution of arbitrary code can be achieved." This happens a lot—not all use-after-free is exploitable, of course, but it happened often enough that all browsers had to start hacking in special allocators to try to reduce the possibility of exploitation of use-after-frees (search for "frame poisoning").

Obligatory disclaimer: these are small code samples. Of course nobody would write exactly these code examples in practice. But we do see these issues in practice a lot when the programs get big and the call chains get deep and suddenly you discover that it's possible to call function foo() in one module from function bar() in another module and foo() stomps all over the container that bar() was iterating over. At this point claiming that C++ is memory safe is the extraordinary claim; C++ is neither memory safe in theory (as these examples show) nor in practice (as the litany of memory safety problems in C++ apps shows).
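As a point of comparison, a minimal Rust sketch (hypothetical `drain_all` helper) of the iterator-invalidation example above: the borrow checker rejects mutation of a container while it is being iterated, and `drain` expresses "consume while emptying" safely.

```rust
// The C++ loop above is a compile error in Rust: the for loop borrows
// the vector immutably, so calling a mutating method inside the body
// is rejected:
//
//     for x in &v {
//         v.clear(); // error[E0502]: cannot borrow `v` as mutable
//     }
//
// Consuming the elements while emptying the vector, the safe way:
fn drain_all(v: &mut Vec<i32>) -> Vec<i32> {
    v.drain(..).collect()
}

fn main() {
    let mut v = vec![1, 2, 3];
    let drained = drain_all(&mut v);
    println!("{:?} {:?}", drained, v);
}
```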


A lot of this just looks to be lacking const correctness. If you declared most of the mutable types const (and the use cases for a non-const unique_ptr are few) you can avoid most of these issues.

I think it is a valid criticism of the language that non-primitive types aren't implicitly const, though. But you could never implement that without colossal backwards-compatibility breakage. Which I guess is fine, since you could just keep a codebase one -std= version behind until you fixed it.

> Use after move: obvious. Undefined behavior.

This I don't have an answer to though. I've always disliked how this isn't a compiler error.
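In Rust it is exactly that: a compile error. A minimal sketch (the `consume` helper is hypothetical) of the pattern:

```rust
// Taking a String by value moves ownership into the function;
// the caller's binding is dead after the call.
fn consume(s: String) -> usize {
    s.len()
}

fn main() {
    let s = String::from("hello");
    let n = consume(s); // `s` is moved here
    // println!("{}", s); // error[E0382]: borrow of moved value: `s`
    println!("{}", n);
}
```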


You can hand out references and still get dangling pointers with const values. For example, you can return an iterator beyond the scope its container lives in and dereference that iterator for undefined behavior (use-after-free, possibly exploitable as above).

Besides, isn't "C++ is memory safe if you don't use mutation" (even if it were true—which it isn't) an extremely uninteresting statement? That's a very crippled subset of the language.


> If you declared most of the mutable types const (and the use cases for a non-const unique_ptr are few) you can avoid most of these issues

Mutability in Rust is perfectly safe because of the static checks built into the type system – the compiler will catch you if you screw things up.

> you could never implement that without colossal backwards compatibility breakage

I cannot express how important immutable-by-default is. It prevents the issues C++ has with folks forgetting to mark things as const. There is also a lint that warns when locals are unnecessarily marked as mutable, which can catch some logic errors (I say that from experience).

Also note that I said 'immutability' not 'const'. Immutability is a far stronger invariant than const, and therefore is much safer. It could also lead to better compile-time optimisations in the future. I'm sure you know this, but just in case:

- const: you can't mutate it, but others possibly can

- immutable: nobody can mutate it
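A tiny Rust sketch (hypothetical `bump` helper) of what immutable-by-default looks like in practice: mutation is an explicit, visible opt-in at each binding.

```rust
// A binding is immutable unless marked `mut`; the `mut` on the
// parameter is the explicit opt-in that makes the assignment legal.
fn bump(mut n: i32) -> i32 {
    n += 1;
    n
}

fn main() {
    let x = 5;
    // x += 1; // error[E0384]: cannot assign twice to immutable variable `x`
    println!("{}", bump(x));
    // A `mut` that is never needed triggers the unused_mut lint.
}
```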


Right; the STL sucks. So it's work, but you can make ref-safe containers, even thread-safe ones. We do that; we do audio rendering with audio-chain editing on the fly, with no memory issues. It takes care, more care than other languages. But it's far from unsolvable.


And the philosophy of Rust is, what if we encoded that "care" into the language itself? That, to me, is a clear win. It is, to me, good systems language design: codifying decades of hard earned "best practices" into the language semantics itself.


Of course it's possible to write correct C++ code, just like it's possible to write correct assembly code. The point is the extra care required: every piece of code needs to be very carefully authored to ensure it's correct, to avoid the myriad pitfalls.


Or you can just trust the language. And if it's not right, or not the way you plan to use it, what then? You're stuck unless the language also permits you to roll your own.


Rust does allow you to implement low-level things in itself, by giving an escape hatch into C/C++-like unsafe code (i.e. risk-of-incorrectness is purely opt-in, rather than always-there).

Examples of things efficiently implemented entirely in pure Rust in the standard library (well, with some calls into the operating system/libc): Vec, the std::vector equivalent; Rc, reference-counted pointers (statically restricted to a single thread); Arc, thread-safe reference-counted pointers; Mutex; concurrent queues; HashMap.
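A small sketch of two of those types in use (the `parallel_count` function is hypothetical, just to show the shape): Arc shares ownership across threads, and Mutex makes the lock the only path to the data, enforced by the type system rather than by convention.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Spawn `threads` workers that each increment a shared counter.
// The data lives inside the Mutex, so unsynchronized access is
// impossible to express in safe code.
fn parallel_count(threads: usize) -> i32 {
    let counter = Arc::new(Mutex::new(0));
    let mut handles = Vec::new();
    for _ in 0..threads {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            *counter.lock().unwrap() += 1;
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    println!("{}", parallel_count(4));
}
```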


Use after move by itself is not undefined behaviour.


It is for most of the important types that people move; e.g. unique_ptr (results in null dereference).


You should write something up with actual code samples. I would also love to actually demonstrate this to people.


I agree with Steve here; in this instance a catalogue of code examples would be a great deal more compelling than natural-language explanations.


They're talking about games in C++, not random C++ apps. Oranges to apples.


Selection bias much? The 500,000 C++ applications that have never blown up in somebody's face aren't discussed on Hacker News.


Maybe most C++ applications are low-value as attack targets, so no-one has bothered to find all the corner cases that make them blow up.

The fact that applications like browsers and operating systems (which are known to be high value targets) have a lot of effort & resources put into security but still have attack vectors makes the "C++ is secure" position fairly indefensible.


pcwalton mainly works on web browser development (Servo), which whilst sharing some goals with game development, also differs in some respects. Although online security is more and more important in games these days, the real appeal of Rust in respect to game development is in providing an alternative to the 'death by a thousand cuts' that can plague large C++ projects.

I've posted a list of the things I consider the most relevant to game development: https://news.ycombinator.com/item?id=7587413 Any one or two of them alone wouldn't really be a compelling enough reason to switch, but put together they form a very compelling value proposition.


It seems hard to say whether Rust really would have eliminated those bugs; the reports are vague, and the ones that aren't (e.g. the fixed-framerate issues) would be an issue either way. My argument isn't solely "get good developers and be done with it." It's a combination of things, and one of the most important is getting good practices in place. I don't know if EA did this, but having a good auto-test system in place probably would have caught those crash bugs and prevented the server issue, for example.


> one of the most important things is getting good practices in place

That is really important, but still, wouldn't it be better if you could encode at least some of those good practices into the language itself, rather than relying on humans to be constantly on their game? I'm certainly not perfect, so I would rather my sloppiness be caught earlier rather than having it come back to bite me in the future. See: http://thecodelesscode.com/case/116



