
> My priorities are reasonable performances and the fastest iteration time possible.

I bought Mount & Blade II Bannerlord on 2020-03-30. I love it to death, but come on...

  // 2024-02-01
  $ curl https://www.taleworlds.com/en/News/552 | grep "Fixed a crash that" | wc -l
  29

  // 2023-12-21
  $ curl https://www.taleworlds.com/en/News/549 | grep "Fixed a crash that" | wc -l
  6

  // 2023-12-14
  $ curl https://www.taleworlds.com/en/News/547 | grep "Fixed a crash that" | wc -l
  101
Maybe feeling like you're iterating fast isn't the same as getting to the destination faster.

Edit: Lol guys, calm down with the downvote party. I was counting crashes, not bugs:

  $ curl https://www.taleworlds.com/en/News/547 | grep "Fixed a bug that" | wc -l
  308
Does your C++ not crash, just theirs?


That game (currently) has 88% positive reviews on Steam and a 77 Metacritic score, with over 15.5k people playing the game right now (according to steamcharts.com).

That's a lot of happy customers.


I can't really comment on the quality of the game or experience or how buggy it feels because I've never played it, but I will say that counting fixed crash situations is a somewhat arbitrary and useless metric. If each of those crashes affected and was reported by a single person or even nobody (because no regular person could realistically encounter it), that's a vastly different situation than if each of those crashes was experienced by even 1% of the users.

The criteria for what gets mentioned in the patch notes aren't always purely about what users care about. Sometimes it's because the developers want to signal effort to users and/or upper management.

Maybe Mount and Blade was super buggy in the past and is still super buggy now, so all the crashes fixed are just an indicator of how large the problem is for them and how bad the code still is. I dunno, you didn't really give any information to help on that front.


Mount & Blade 2 was released very early, and despite constant improvement (they keep patching it at a strong pace), it's only slowly evolving.

It was even downright unfinished on release, with many game systems that claimed to be doing something simply being unimplemented.

But despite all that it was and is still fairly playable and enjoyable, even at release. A game only needs a great core gameplay loop to succeed, even if large parts of it are completely broken.

Interestingly, TaleWorlds makes its own engine with fairly unique capabilities. 200 players can fight in fast-paced, precise melee combat on a single server. Even more than in fast-paced shooters, it can be extremely frustrating for players when the game doesn't behave in exactly the way you would expect (for example, standing undefended just a few centimeters outside the reach of an opponent's swing, or relying on interrupting their attack with your own landing 100 milliseconds before). They've made their own scripting language for everything related to policy, and this scripting language is what modders interact with. It is absolutely atrocious as a language, but it serves the purpose well enough.


> If each of those crashes affected and was reported by a single person or even nobody

Then do you really think they'd be spending time fixing it?

(Actually, you know what, they probably would.)


That's why I had a paragraph mentioning different reasons things might be mentioned. I don't think it's uncommon to find a bug that could cause a crash while working on something else, confirm it does crash, and then fix it. If the culture is to mention those things in patch notes even if you're not sure it ever actually caused a user problem, then it will be listed.

That doesn't mean all, or even any, of the listed crashes were like that, but it does illustrate that it's hard to know what they actually mean without additional info.

(For what it's worth, I'm a long-time Tarkov player, so I'm definitely familiar with buggy games and apparent development problems with rushing, so this is more a devil's advocate position on my part.)


With Rust and the exact same iteration times, management, and deadlines, you end up with the same number of crashes, they're just panic!()s instead. That's an improvement, sure, but it's fighting a symptom.


There are a bunch of useful clippy lints to completely disable most forms of panicking in CI. We use this at my work since a single panic could cost millions of $ in our case.
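For reference, a minimal sketch of how we gate it (these are real clippy lints; the exact set to deny is a per-project choice):

  $ cargo clippy -- -D clippy::unwrap_used -D clippy::expect_used -D clippy::panic

Run in CI, any unwrap(), expect(), or explicit panic!() fails the build, forcing an explicit error path instead.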


With modern languages that take safety more seriously, it's a lot easier to spot places where the code 'goes wrong'.

In an older language, you have nothing to tell you whether you're about to dereference null:

   foo.bar.baz = ...;
Even if you've coded it 100% correctly, that line of code still looks the same as code which will segfault. You need to look elsewhere in the codebase to make sure the right instructions populated those fields at the right time. If I'm scrolling past, I'll slow down every time to think "Hey, will that crash?"

Compare that with more safety-focused languages, where you can see the null dereferences on the page: unwrap() or whatever it is in Rust. Since they're visually present, you can code fast by using the panicking variants, come back later, and know that they won't be missed in a code review. You can literally grep for the calls to refactor.
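A toy sketch of that (hypothetical types, just to show the greppability):

  struct Bar { baz: Option<i32> }
  struct Foo { bar: Option<Bar> }

  // Quick-and-dirty version: both potential crash points are visible
  // at the call site and trivially greppable, and they panic cleanly
  // instead of segfaulting.
  fn fast(foo: &Foo) -> i32 {
      foo.bar.as_ref().unwrap().baz.unwrap()
  }

  // The same chain with explicit propagation, once you come back to
  // clean it up.
  fn careful(foo: &Foo) -> Option<i32> {
      foo.bar.as_ref()?.baz
  }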


I love Rust, but a crashing released game is better than a half-finished "perfect" game, or a game where you couldn't iterate quickly, and ended up with a perfectly tuned, unfun game.


> a crashing released game is better than a half-finished "perfect" game

For who? I, and I'm pretty sure most other gamers, would rather a fully-finished "perfect" game that took twice as long.


> For who? I, and I'm pretty sure most other gamers, would rather a fully-finished "perfect" game that took twice as long.

Evidence suggests otherwise. Of all demographics, gamers appear to be the most tolerant of buggy software.

I'm playing a 2020 game right now that has (in about 30 hours of gameplay):

1. Crashed twice 2. Froze once 3. Has at least ONE reproducible bug that a player would run into at least once every mission (including the first one).

Since this game is now so old it's not getting any more patches, these bugs are there for all eternity, because they just do not move the needle on enjoyment by the gamer.

Searching forums for Far Cry 5 Bugs gives results like this: https://www.reddit.com/r/farcry/comments/1ai4jzx/has_far_cry...

Gamers just don't care about bugs unless it stops them playing the game at all!

In order for bugs to have an effect on gamer enjoyment, it literally needs to make the game unplayable, and not just make the player reload from the last savepoint.


> Evidence suggests otherwise. Of all demographics, gamers appear to be the most tolerant of buggy software.

Evidence suggests otherwise. Of all demographics, game studios appear to be the most tolerant of buggy software.

Just go look at CP2077 or BF2042 or Fallout 76 or ...

So many games out there that no one wanted to play until they finally actually made a game that was ready for release, a year or more after they released it.


> 1. Crashed twice 2. Froze once 3. Has at least ONE reproducible bug that a player would run into at least once every mission (including the first one).

Sounds about on par even for enterprise software, in cases where shipping quickly is prioritized over overall quality, doubly so for gamedev which is notorious for long hours and scope creep.


The problem is we would have a lot fewer games, and the games we would get would not be as fun. Rust appears to have the following problems:

1) As the article pointed out, game developers are less productive in Rust. This is a huge problem.

2) Game budgets are not going to get bigger. This means that if Rust reduces productivity, games are going to be less polished, less fun, etc. if they are written in Rust.

3) Game quality is already fine. 99% of the games I play have very few noticeable bugs (I play on an Xbox Series X). Even the games with bugs are still fun.

Basically, gamers are looking for fun games which work well. They are not looking for perfect software which has no bugs.


> As the article pointed out, game developers are less productive in Rust. This is a huge problem.

I don't think it's limited to just game developers though. Unless you are writing something in which any GC time other than 0ns is a dealbreaker, and any bug is also a dealbreaker, you're going to be less productive in Rust than almost any other language.


Oh, come on, we're yet again extrapolating from "Rust is bad at rapid iteration on an indie game" to "Rust is bad at everything". If Rust were really that astoundingly unproductive of a language, then so many developers at organizations big and small wouldn't be using it. Our industry may be irrational at times, but it's not that irrational.


> Oh, come on, we're yet again extrapolating from "Rust is bad at rapid iteration on an indie game" to "Rust is bad at everything".

I am saying that Rust development has a lower velocity than mainstream GC'ed languages (Java, C#, Go, whatever).

I didn't think you were disputing this claim; if you are, I'd like to know why you think otherwise.


> I am saying that Rust development has a lower velocity than mainstream GC'ed languages (Java, C#, Go, whatever).

It depends on what you measure.

For software that must get it right, Rust can be more productive. The early cycles of development are slow, especially for people who have not yet surrendered to the borrow checker. But the lack of simple mistakes, or more accurately the compiler's early detection of simple mistakes, dramatically speeds up development.
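(A toy sketch of the kind of simple mistake it catches immediately, assuming nothing beyond the standard library:)

  fn main() {
      let mut scores = vec![10, 20, 30];
      let best = &scores[0];
      // scores.push(40); // won't compile: cannot borrow `scores` as
      //                  // mutable while `best` still borrows it
      println!("{best}");
  }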

But in a lot of software those mistakes, whilst important, will not "crash the aeroplane", so it is not worth that extra cost in the early cycles.

I am not a game developer, or a player, but games are in that category, I think.


> I am saying that Rust development has a lower velocity than mainstream GC'ed languages (Java, C#, Go, whatever).

That's not what you said: you said you're going to be less productive in Rust than nearly any other language, not "mainstream GC'd languages".

> I didn't think you were disputing this claim; if you are, I'd like to know why you think otherwise.

Depending on the domain, I am disputing that, because of things like the Cargo ecosystem, easy parallelism, ease of interop with native code, etc. There is no equivalent to wgpu in other languages, for example.


> That's not what you said: you said you're going to be less productive in Rust than nearly any other language, not "mainstream GC'd languages".

I feel that you're selectively reading only the parts you have talking points to respond to.

Here is exactly what I said:

> Unless you are writing something in which any GC time other than 0ns is a dealbreaker, and any bug is also a dealbreaker, you're going to be less productive in Rust than almost any other language.

I mean, I literally carved out an exception use-case for Rust; viz for software that can't handle GC.

I wrote a single sentence with a single point, not a single point diluted over multiple paragraphs. You would have to read only half-sentences to interpret my point the way you did.

If you aren't going to even bother reading full sentences, why bother engaging at all?


Would "you're going to be less productive in Rust than nearly any other language unless GC time or any bug are dealbreakers" be a fair summary of what you mean?

Either way, I fully disagree with that. Many other traits of Rust may make it a better choice even if the low-productivity claim were true:

- integration with other languages - I know of companies successfully developing a single Rust library and just using thin wrappers for other languages they need support for

- data races detected at compile time - in highly concurrent applications being able to catch data races at compile time is huge. Please take a look at a blog post from the Uber team[1]. A dedicated team investigated 1100 data race occurrences. Data races may lead to bugs that are a PR nightmare for companies, like a bug in GitHub that sometimes resulted in a user being logged in to an account of another user[2].

- Embedded systems

- WASM - there are not that many languages that natively compile to WASM and have good tooling around it. For most GCed languages you have to go for "close enough" alternatives like TinyGo or AssemblyScript or use tools that bundle an entire interpreter in a WASM binary

But even outside these categories, I don't think it's universally true that Rust is less productive than the alternatives, and my experience shows me otherwise. For example, in many domains, you barely care about the borrow checker and lifetimes at all. Take a look at a Todo Backend[3] I wrote in Rust[4]. If you compare it to one of the Go implementations of the same thing, you probably wouldn't see much of a difference, because of the nature of web backends: you get some data in, you process the data, usually making some database queries, and you return some data (or not).

What about stateful applications without a database, though? Surely that must be hell? Even here it's not as black and white as you would like to see it. When I was working at Hopin (once upon a time a unicorn startup scaling extremely fast) we had to implement a presence server - a service holding information on who is online and what event they're attending, which video they're watching, etc. Nothing too complex, but we had a requirement to hold up to 100k open connections, and at the time we didn't have any infrastructure for that (most of the stack was Node.js and Rails). Someone wrote a proof of concept in Go using Redis as a backend with a queue and using Redis for leader election, with a big caveat - each of the nodes had to process all of the queue items, so we were limited by the size and processing speed of a single Redis node.

When the time came to implement the production version I said: let's treat the application as a database. We cared only about current data. If the application failed, we could restart and clients would reconnect. If we wanted to have a history of presence we could push all of the events to Kafka or another queue, but still mostly use in-memory data for real-time needs.

I had some Rust exposure before, but it was my first production app. I was also joined by a person who had never written Rust before. In two weeks we had a working application, while I was also making sure the other programmer coded as much as possible and doing a lot of pair programming. We deployed it shortly after. Then we added a few more features in the next two weeks or so.

The code was extremely simple - more or less a few hashes behind a WebSocket-based API. As all of the data lived through the entire lifetime of the application, we didn't have to care about the borrow checker or lifetimes. We had actor-like code - a few threads, with each thread holding a data structure and a few channels that send commands. We were moved to other projects, so the presence server became unmaintained, and even then it worked without any issues whatsoever for the next half a year or so. Then there was a big push to scale all of the services to handle a minimum of 500k concurrent users, ideally a million. The Rust app needed almost no changes; after some kernel and load balancer tune-up, it could handle up to 2 million connections frequently sending events on a single machine. If we wanted to, we could easily have sharded it, but there was no need.

The push to go more into real-time features was deprioritized by then, though, so management said the app had to be rewritten in Node.js. There was one attempt to do that, which failed after two months or so. This is not to say you can't build an application like that in Node.js. You can, but you can't use the same architecture, because you can't multithread Node.js applications, thus you have to run multiple processes, thus you have to have some kind of database or queue or service you use (at the time they tried using one of the Pusher-like services, because they didn't want to handle WebSocket connections themselves).

But even outside of specific examples like that - in my experience, I don't feel less productive in Rust when it comes to writing production-level applications, not necessarily critical ones or ones with wild performance needs. It's subjective, of course, but I agree with @pcwalton - if Rust were universally unproductive, I don't believe so many companies would be using it.

One last thing to consider is the expressiveness of the language. In many languages, like Go, it's hard to make certain abstractions that are not a burden to use. Even after they introduced generics, most of the ecosystem is still using `interface {}` all over the place, and projects like Kubernetes implement their own dynamic runtime type system. Recently I've been working on a load-testing tool running scenarios as WASM binaries, called Crows[5], and one of the abstractions I've created is an RPC client that can send requests in both directions. At the code level, you use it like many RPC libraries in higher-level languages. You define your interface[6] and then you can call it like it was a regular local method[7], which is huge when developing code, especially in an editor with LSP, cause it will show you what methods you can call and what arguments they take. What's more, any typo would be caught at compile time, as the server and the client share the same interface. In Go, even the official RPC client is like `client.Call("TimeServer.GiveServerTime", args, &reply)`, which can't be type checked as far as I know. I think the ability to create these kinds of APIs that prevent you from doing the wrong thing is a huge advantage of the language (see the toy sketch after the links).

  1. https://www.uber.com/en-DE/blog/data-race-patterns-in-go/
  2. https://github.blog/2021-03-08-github-security-update-a-bug-related-to-handling-of-authenticated-sessions/
  3. https://todobackend.com/
  4. https://github.com/drogus/todo-backend/blob/main/src/main.rs#L138-L151
  5. https://github.com/drogus/crows
  6. https://github.com/drogus/crows/blob/8eac9c9dfb3df3e5f329b5ba1ee85d37bceb6dc2/utils/src/services/mod.rs#L94-L105
  7. https://github.com/drogus/crows/blob/8eac9c9dfb3df3e5f329b5ba1ee85d37bceb6dc2/coordinator/src/main.rs#L80
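To make that last point concrete, here's a toy sketch (a hypothetical trait, not the actual Crows code):

  use std::time::{SystemTime, UNIX_EPOCH};

  // A typed interface: method names and argument types are checked at
  // compile time, unlike a stringly-typed client.Call(...).
  trait TimeServer {
      fn give_server_time(&self) -> u64;
  }

  struct Local;

  impl TimeServer for Local {
      fn give_server_time(&self) -> u64 {
          SystemTime::now()
              .duration_since(UNIX_EPOCH)
              .expect("system clock before UNIX epoch")
              .as_secs()
      }
  }

A typo like give_server_tme() fails to compile; the string "TimeServer.GiveServerTme" only fails at runtime.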


Have you written much Rust?


Uhh, no, the games we got 15 years ago and before were definitely just as fun.


Hell no. Lots of these games take 5-7 years to make. You want to turn that into 10-14? I can live with the rare crash bugs.


What if it's 5-7, but only after there is a deep enough dev pool and language tooling to address some of the productivity issues mentioned in the blog? Why make up arbitrary x2 factors?


IDK, seems to me like studios did just fine putting release-quality games out at release 15-20 years ago shrug

"rare" LOL


No, the game doesn’t take twice as long. It just gets abandoned half-finished.

The world is full of half-finished games, it takes time and money to push to a finish.


Ah right that's why no games existed two decades ago.


It's a chicken-egg problem. You won't even see 10% of the bugs lurking in your game without releasing it to a wider audience, no matter how long you worked on it or how good your QA process is (that's what Steam's Early Access is for after all). YMMV depending on the complexity of the game of course.

But even if your game code is perfect and completely bug free, there are so many weird PC configs and buggy drivers in the wild that your game will crash for some users. And for the affected users it doesn't matter whether that crash is caused by crappy game code, or some crappy 3rd party software interfering with your game. For the user it's always the game's fault ;)


> You won't even see 10% of the bugs lurking in your game without releasing it to a wider audience, no matter how long you worked on it or how good your QA process is (that's what Steam's Early Access is for after all).

Just because they like to say that doesn't mean it's true. I've had access to see the list of known issues considered "critical" around release time for a few games. They know the bug exists, they just want to release it more than they want to fix it.

> But even if your game code is perfect and completely bug free, there are so many weird PC configs and buggy drivers in the wild that your game will crash for some users.

Which in no way invalidates the point that most modern games are absolutely unplayable for most users at release.

Oh yeah, and also, that's why beta testing exists.


Perfect is the enemy of good. You never release anything that's perfect.

Perfect is impossible.


> "perfect"

> perfect

See the difference?


> I, and I'm pretty sure most other gamers, would rather a fully-finished "perfect" game that took twice as long

I have recently completed Cyberpunk Phantom Liberty. The game crashed 4-5 times during 100-150 hours of gameplay. The crashes were pretty much painless because I quick save often.

The game was amazing.

The development of the game started in 2012, 12 years ago. I'm not sure you or most gamers would rather have a fully-finished "perfect" Cyberpunk 2077 released in 2036.


> 4-5 times during 100-150 hours of gameplay

Great, thanks for proving my point! If you had played CP at release, how many times would it have crashed?

Do you really think it would have taken them another 12 years to get to the point they're at now if they hadn't released it 4 years ago? SMH


Photoshop does crash. Trust me, if you do enough image editing you'll know it's not even a super rare event. They generally do a poor job of handling situations where you don't have enough storage or RAM.

It didn't stop Adobe from being worth 200B.


Hard to know what TaleWorlds are actually optimising for, because half the features of Bannerlord feel like they've never been played by a dev, let alone iterated on.


How many of those crashes were caused by memory safety issues though?

A lot of those crashes might simply be called a "panic" in Rust.


And yet the fact that Bannerlord game logic is entirely in C# makes this possible:

https://github.com/int19h/Bannerlord.CSharp.Scripting

which in turn makes it a lot easier and more convenient to mod. Try that with Rust...


Yeah this is a common problem in the industry, we rarely have enough time to refactor what should be considered prototype-level code into robust code.


The game dev industry could form a consortium to launch its own dedicated general-purpose language, built from scratch to compile very fast like V or Go, run predictably, be much safer, be more reusable, and be extremely productive, applying the lessons learned from C, C++, C#, and more.

Also, I think LLMs will be able to run against code bases to suggest mass codemods to clean things up, rather than having humans make a zillion changes or refactor fragile areas of tech debt. LLMs are already being applied to generate test cases.


Jonathan Blow’s Jai is an attempt at something like this. It’s looking promising so far!


Interesting. I went through the primer spec. It appears to be a different take on D or Go with some key distinctions. Any new language should begin with a specific thesis of the competitive advantages it offers and the problems it solves over existing customary and alternative tools. Jai appears to fulfill this property, so that's a good sign.


> It is still in development and as of yet is unavailable to the general public.

Is it still the case?


C# is that language (see Godot, Stride, FNA, Monogame).


Not really; it was adopted. It originated from Microsoft as their post-J++ Java alternative for the CLR, for the purpose of making it easier to write banking server software and Windows apps.


Does it matter what it was 20 years ago? It is the go-to language for gamedev today and only keeps getting better at it.


Both things can be true. I'm saying it wasn't designed as such. I don't know what you're arguing about.


I believe that better tooling can help, yes. With refactoring, debugging, creating performance and style reports, updating documentation and a ton of other stuff.


This comment is nonsense


My impression is that this is due to their non-robust programming style. They do not add fallback behavior when e.g. receiving a null object. It would still be a bug, but could be a log entry instead of crash.


> My impression is that this is due to their non-robust programming style.

It's been 50+ years. I don't think that it's worthwhile just telling the programmer to do a better job.

> They do not add fallback behavior when e.g. receiving a null object. It would still be a bug, but could be a log entry instead of crash.

This is a pretty big feedback loop:

  * The programmer puts the null into the code
  * The code is released
  * The right conditions occur and the player triggers it
  * IF DONE SKILLFULLY AND CORRECTLY the game is able to recover from the null-dereference, write it out to a log, and get that log back to the developers.
  * The programmer takes the null out of the code.
If you don't do the first step, you don't get stuck doing the others either.
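In Rust-ish terms, a toy sketch of "not putting the null in" (hypothetical types):

  // If a field isn't an Option, you can't even construct the value
  // without it, so the release/crash/log/fix loop above never starts.
  struct Weapon { damage: u32 }

  struct Player {
      weapon: Weapon,            // no null here to forget to check
      nickname: Option<String>,  // absence is explicit where it's real
  }

  fn describe(p: &Player) -> String {
      match &p.nickname {
          Some(n) => format!("{} ({} dmg)", n, p.weapon.damage),
          None => format!("anonymous ({} dmg)", p.weapon.damage),
      }
  }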


50+ years and people still fail to grasp this.

You have to put something (an optional, or a default constructed object in a useless state), and all you did was skip the null check. In the case of an optional, you introduced a stack unwind or a panic. Everything else stayed the same. Maybe that default even deleted the hard drive instead of crashing.

Coding is hard. "Just don't code" is not the answer. You can avoid something, but that doesn't mean it won't show up in some other fashion.


Again, if you disallow unwrapping and panicking at the CI level, you actually force your developers to properly handle these situations.


> You have to put something (an optional, or a default constructed object in a useless state)

No, you really don't. There is no default number, no default string, no default piece of legislation, no default function.


Arbitrary recovery from null pointers isn't a good way to do robust programming. I recommend doing the exact opposite, actually.

https://en.wikipedia.org/wiki/Crash-only_software

https://medium.com/@vamsimokari/erlang-let-it-crash-philosop...
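A crude sketch of that shape in Rust terms (using std::panic::catch_unwind; a real BEAM supervisor does far more, this just shows "restart with fresh state" beating "limp along on a fallback value"):

  use std::panic;

  // Run a subsystem in a loop; on panic, log and restart it with
  // fresh state instead of continuing with corrupted state.
  fn supervise<F>(subsystem: F)
  where
      F: Fn() + panic::RefUnwindSafe,
  {
      loop {
          match panic::catch_unwind(|| subsystem()) {
              Ok(()) => break, // clean shutdown
              Err(_) => eprintln!("subsystem crashed; restarting"),
          }
      }
  }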


A crash of an actor in BEAM is incomparable to a crash of a video game.


Is it? Is there no reasonable case where a subsystem in a game crashes, then restarts itself? Unless I'm mistaken, I've experienced this myself in video games more than once. Anything beats a full crash with a pointless error message.


I feel like a lot of people on HN think making a game is like making a web service or a GUI application. Yes, this behavior is used in video games sometimes, but "restart itself" often means reloading a save file or something similar.


But if your video game uses a DSL for actors then you can do it in the DSL, which avoids special arbitrary bug-hiding behavior.


I dare you to board a plane whose software was written that way.



