Hacker News

There is no reason to think a ground-up rewrite of existing working code will lead to better and more maintainable code.

As with almost every argument in this area, that depends very much on context.

A substantial rewrite might be an opportunity to use better tools and improved techniques that have been developed since the original was written. In some cases, that could represent a huge improvement in things we actually care about, like developer productivity and the quality and performance of the product. Importantly, this doesn't imply anything was done wrong or any bad decisions were made by the developers of the original product or the people who have maintained it so far. It's just that we work in a fast-moving industry, and sometimes even a few years can see big improvements in what the available technologies can do.

A development team starting with a relatively clean slate can take into account all the knowledge and feedback received over the lifetime of the existing system. Maybe it was too expensive to make use of those insights while evolving the original system, but a new version can take advantage of them. Again, that can represent big gains in areas we care about.

It's often observed that a big rewrite risks losing a lot of small improvements and fixes that have accumulated over the lifetime of the existing system, and of course that's true. However, it's also true that a big rewrite can avoid a lot of existing design problems or get rid of long-standing bugs that no one was ever going to get around to fixing.

I've seen big rewrites that didn't end well. But I've also seen big rewrites that were done for sensible reasons and had very positive outcomes. And I've also seen things that should have been rewritten but weren't and instead became a drag on everything. There is no universal rule here.



A substantial rewrite is developers asking for a do-over, which is infantile behavior (or as GP more kindly put it, delusion).

This is a hill I will die upon: the people who don't deserve rewrites ask for them, early and often. The people who deserve a rewrite rarely mention it, and in fact they have likely already done it, bit by bit, as a sibling comment mentioned by way of example. I will say that I've often surprised myself with features I never would have hoped for on the refactoring road. Things that would have been entire Epics become six weeks of work, sometimes less.

Refactoring is the Ship of Theseus scenario: you replace the ship bit by bit until it's both a new ship and the same ship. Yes, it's a titanic pain in the ass, but it's also atonement for your past bad decisions, which are heavily populated with cut corners you'll be able to spot the next time.

All worthwhile learning is effortful, and greenfield is the lowest-effort path to anything. You don't learn much from greenfield except why greenfield is not a panacea.


> A substantial rewrite is developers asking for a do-over, which is infantile behavior

That depends entirely on the reasons why they ask for a rewrite.

"I like this tech better", "I don't want to work with this tech", "This new tech is shinier": I agree with you, those are not solid engineering reasons.

"This doesn't interface well with the rest of the system because...", "This is going to cost us in the future because...", "This won't scale well because...": I strongly disagree.

You may notice that the operative difference here is the word "because". If someone can give a quantifiable, verifiable, technical reason for a rebuild, then management should at least hear the guy out. They can still say no. But then the engineer did his job, and if the whole show goes haywire later because the 14-year-old Java backend fails to scale up and the company loses money over that, nobody can say he didn't warn them.


You approach scalability problems by identifying bottlenecks through measurement, and then you redesign the problem areas to solve the problem. This might mean replacing an inefficient algorithm, adding caching, partitioning a database, or whatever the problem calls for. Most likely, the majority of the code will be unaffected by these changes.

If an engineer proposes that the only way to solve scalability problems is to rewrite everything from the ground up, it just tells you they haven't been able to identify the root cause of the problem. The rewrite will probably end up having the same problem.
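A minimal sketch of this measure-then-redesign approach, using Python's built-in cProfile. The slow function and its fix are illustrative, not from the discussion: the point is that measurement localizes the problem, so only a small area needs redesign.

```python
import cProfile
import io
import pstats
from functools import lru_cache

# Hypothetical hot spot: naive recursion that recomputes subproblems.
def fib_slow(n):
    if n < 2:
        return n
    return fib_slow(n - 1) + fib_slow(n - 2)

# Step 1: measure to find the bottleneck instead of guessing.
profiler = cProfile.Profile()
profiler.enable()
fib_slow(22)
profiler.disable()
report = io.StringIO()
pstats.Stats(profiler, stream=report).sort_stats("cumulative").print_stats(3)
# The report shows fib_slow dominating the call counts.

# Step 2: redesign only the problem area -- here, add caching.
# The rest of the codebase is untouched.
@lru_cache(maxsize=None)
def fib_fast(n):
    if n < 2:
        return n
    return fib_fast(n - 1) + fib_fast(n - 2)

assert fib_slow(22) == fib_fast(22)  # same result, far fewer calls
```

The same pattern generalizes: a database profiler points at a slow query, and the fix is an index or a partition on that table, not a rewrite of everything that touches it.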


Unfortunately, that argument is like saying you approach security by identifying vulnerabilities and patching them, or you approach performance by profiling to find hot spots and optimising them. Of course, those things are often true in specific instances, and that's usually where you should start.

However, all of these are systemic issues, and once you've picked the low-hanging fruit you can still be left with systemic problems that are not concentrated in one place but spread throughout your code. Eventually your profiler curve is nearly flat, but your JavaScript or Python code still isn't going as fast as C or Rust. Eventually you think you've patched all of your injection points, but if you're using manual string concatenation to build your SQL queries, you'll probably keep missing others. And eventually you run out of places to add caches and load balancers, and it turns out that your existing data storage model is fundamentally limited and needs to be replaced.
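The SQL injection case above illustrates why patching individual sites doesn't scale: the systemic fix is switching every query site to parameterized queries. A minimal sketch with Python's built-in sqlite3 (the table and payload are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "nobody' OR '1'='1"  # a classic injection payload

# Fragile pattern: manual string concatenation. Every new query site
# is another place to miss a quote -- that's the systemic problem.
unsafe = "SELECT role FROM users WHERE name = '" + user_input + "'"
print(conn.execute(unsafe).fetchall())  # injection succeeds: [('admin',)]

# Systemic fix: parameterized queries. The driver treats the payload
# as literal data, never as SQL, at every call site uniformly.
safe = "SELECT role FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # no match: []
```

Moving an entire codebase from the first pattern to the second is exactly the kind of change that is spread throughout the code rather than concentrated in one hot spot.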

In each of these cases, you may end up needing to rewrite a whole section of your application, or a whole service in your distributed system, because you can no longer paper over the cracks. Fortunately it happens relatively rarely, but it certainly does happen!


That is the calculus of a project that has been running for 1-2 years.

The calculus of a project that has been running for 10-20 years is often different.

I've explained why - and why in some cases the approach you advocate is literally impossible - in my other comments in this discussion, which I invite you to read if you haven't already.


> A substantial rewrite might be an opportunity to use better tools and improved techniques that have been developed since the original was written. In some cases that could represent a huge improvement in things we actually care about like developer productivity and the quality and performance of the product.

Can you give an example? I have a hard time imagining what kind of techniques cannot be applied to an existing code base with some adaptation, but require the code to be written from scratch.


One example is adopting a safer or more productive programming language for a new version. The Rust ecosystem has now reached a point where it's a viable replacement for a lot of things we would almost certainly have used C++ for a decade ago, with Rust offering significant safety and productivity benefits. Rust is also now being used to replace tools for web developers where the incumbents were written in JavaScript, and in this case the advantage is order(s)-of-magnitude performance improvements.

Another is when your platform evolves and forces the issue, like the web plugins being replaced by new web standards that we were talking about before. Here you might not need to rewrite your entire application, but you probably are forced to rewrite the affected parts and possibly significantly change the software architecture around them. A related example is if you previously wrote your application targeting a specific environment and now want to support multiple environments, but the operating systems, frameworks, or other dependencies follow very different conventions that impose constraints on your software design.



