Many companies I have worked for operate like this. Engineers get to work on shiny new features, the features ship, everyone is happy. Months later, tons of bugs have accumulated. The original authors are already part of another team (because “breaking silos”, but actually because “make everyone replaceable”). The engineers who inherit the project have to maintain it and fix the bugs until yet another team takes over.
Such is the hellscape we’ve brought on ourselves through the widespread adoption of “minimum viable product” as the right way to build things. We judge viability by some feature set, not by whether the stupid thing is resilient or can be maintained.
It also doesn’t help that “minimum viable” is only one step away from “non-viable”. Every project then becomes like Icarus, testing how close to the sun we can fly before our wings melt.
But what's the alternative here? "We spent longer than was minimally viable but we still don't have a good idea if it has market fit"-product? In my experience the code usually gets binned whether the idea gets traction or not. Some companies misjudge when to rewrite, but that doesn't make the MVP part of the process wrong.
The absolute greatest wastes of talent and humanity I've ever seen in tech didn't come from tech debt; those efforts were almost always at least aimed at a product people were paying money for. The biggest wastes came from over-delivering products that hadn't succeeded and were never going to.
A portion of “product-market fit” failures are actually software quality failures. It’s easy to blame the ensh*ttification of software on corporate incompetence, but I think “minimally viable” is part of the story as well.
The world we have now where everything is built to be thrown away, including software, has had the side-effect of destroying craftsmanship. And I'm becoming more convinced as I age that the world is poorer for it.
> The world we have now where everything is built to be thrown away, including software, has had the side-effect of destroying craftsmanship. And I'm becoming more convinced as I age that the world is poorer for it.
Not every area of the software market is like this. In ERP software, for example, applications from the '90s are often still in use by many customers (maybe revamped, maybe not), and maintenance periods are measured in decades (typically not initially, but there always seems to be a maintenance extension).
The MVP shows whether the idea would get traction. But what good is an idea that gets traction if it is unfeasible to scale, or if the organisation is not willing to support it? I think this is what Google does with many of the products it ends up cancelling: they tested the MVP, people bought in, but the organisation had already moved on, so there was no will to support and further develop it. We should be responsible and build an MVP only after deciding whether the organisation is able and willing to scale and support the product. Otherwise the downsides are a toxic crunch to keep the product alive, customers unhappy that yet another product dies, and so on...
In my opinion, instead of searching for an alternative, we should use good programming languages that are extremely refactor-friendly and legacy-resistant.
That's why I love Elm as a front-end language (and hope to see its successor, Roc, succeed).
For the back-end, that's why I love Rust, Haskell, etc.: languages closer to pure FP. I can hand the codebase to other devs and still know it's not going to turn into what I've seen happen to Python, PHP, and other OOP codebases.
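As a rough sketch of what I mean by refactor-friendly (a toy example of my own, not from any real codebase): in Haskell, extending a sum type makes the compiler point at every place that needs updating, so a new maintainer can't silently miss a case.

```haskell
{-# OPTIONS_GHC -Wincomplete-patterns #-}
{-# LANGUAGE LambdaCase #-}

-- A hypothetical order status shared across a codebase.
data Status = Pending | Settled | Refunded

describe :: Status -> String
describe = \case
  Pending  -> "waiting on the bank"
  Settled  -> "done"
  Refunded -> "money returned"

-- If someone later adds a Disputed constructor, GHC warns (or errors,
-- with -Werror) on this match and on every other match over Status,
-- instead of letting the new case fall through at runtime.

main :: IO ()
main = putStrLn (describe Pending)
```

The equivalent drift in a dynamic OOP codebase tends to surface only in production.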
Nope, it's not really about the language: any mainstream language/platform can be used well or badly depending on who's guiding the design of the app (assuming somebody is, instead of falling into the "agile means no design up front" cult). You can create an elegantly structured, maintainable app in Python, Node, Rails, or .NET, or you can create a big ball of mud.

Perhaps apps in Elm, Rust, Haskell et al. have a bias towards better design because those languages attract an elitist crowd who think more consciously about these things. If Haskell ever caught on enough to have an "eternal September" moment, the world would be littered with shitty codebases of pseudo-pure-functional code that somehow broke all the idioms that are supposed to make the language great.
I once had a client who'd had a shiny but disorganized MVP developed on .NET, and of course as the org tried to scale and ramp up new features, the devs had to fight more and more against the design (or lack of it in some parts, or overly complex over-design in other parts). At some point he met some dude at a networking event who had built a successful business on a Node codebase, and became convinced that we should rewrite from the ground up in Node because our performance problems were all the platform's fault. Wrong wrong wrong. But it's much easier to believe a sales pitch than to do the hard work of learning what good, performant design in your chosen language/framework looks like.
Who knows whether Haskellers are better, on average, at designing software. As a Haskeller I'd like to think they are! But I really have no idea. What I do know is that _I_ can design better software in Haskell than in Python, and I put that down to qualities of the language.
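For one concrete example of those qualities (my own toy sketch, not from any real project): newtypes make it a type error to confuse two values that share a runtime representation, which in Python would just be two ints waiting to be swapped.

```haskell
-- Two IDs that are both integers at runtime, but distinct types to the
-- compiler, so passing them in the wrong order is caught at compile time.
newtype UserId  = UserId Int
newtype OrderId = OrderId Int

fetchOrder :: UserId -> OrderId -> String
fetchOrder (UserId u) (OrderId o) =
  "order " ++ show o ++ " for user " ++ show u

main :: IO ()
main = putStrLn (fetchOrder (UserId 42) (OrderId 7))
-- Swapping the arguments, fetchOrder (OrderId 7) (UserId 42),
-- is rejected by GHC rather than becoming a production bug.
```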
Help, I've seen bad Elixir: Ecto queries spanning dozens of tables, with performance grinding to a halt as load increased, and models and logic so intertwined that you had little hope of untangling the ball of string enough to refactor and scale the database.
You can make bad things in any language. I like Go for the same reasons you like FP, and there too, people can do strange and unmaintainable things.
If it didn't get traction, someone defined the MVP badly, because it wasn't actually viable. The M suggests a minimal feature set or low effort, but it shouldn't: now that competition is high, an MVP needs far more quality and features than it did a decade ago.
I don’t think MVPs are the problem here. Most projects were like this before MVPs were even conceived of. Usually the failure is that the project isn’t minimal at all; it’s the maximum complexity a given team can handle.
I don't blame MVPs at all. At $prev_gig, we did agile and MVPs and all that, BUT we also focused on ownership. You build it, you support it: forever, or until officially handed off to and accepted by another team. Everything has an owner.
You can still get teams who are scared of what they own and refuse to update it, but that meets a whole lot of "and so? Do it" when you have a healthy, single stack-ranked product backlog and the higher-priority thing requires output from them (while a pain, it didn't happen _that_ often).
To be fair, $prev_gig was a very well-run company in a lot of ways, and better than any other shop I have been in.
What I'd like to see is expectations for the new product being managed well before anything starts, taking resource limitations (time, money, workforce, etc.) into account.
And not going for shiny things that don't fit!
Because limitations are everywhere. OK, software engineers like to pride themselves on the ability to do anything and everything (not as much as sales guys, of course), but this stems from a ninja attitude towards technology and a habit of forgetting the other dimensions of reality. We should only attempt manageable goals, on all the necessary levels.
I am still waiting for a place where this is achieved...
I often joke that every post-funding-round startup job is you playing a gore-cleanup game for the people who made the MVP during the pre-funding stage.
Having been the original author of a company-defining feature and then told that the silo must be broken, only to see my work stepped on for years to come, I wholeheartedly agree. The inheritors weren't qualified to make the decisions, my grand ideas were pushed to the side, and watching the incompetence in managing said feature has been a hard thing to overcome. I'm still salty about it every time a stupid bug arises, especially since warnings were raised with ample time to adjust. But I learned an important lesson, and I can say with certainty that next time I won't hesitate to be perceived as an asshole and to die on hills about it.
Completely correct, but I don't think that's where OP is coming from, or what the article intends to suggest either. It's recommending that you try multiple things, get a feel for what's technically feasible and whether it looks interesting to the customer, and push that forward. It's very applicable to indie devs, and also to large companies to some extent. This philosophy is great for identifying the feature/product you want to spend meaningful time on.
In fact, one could wager that the situation you described is directly a consequence of not adhering to what OP is suggesting.
"Make Everyone Replaceable," by the same people who brought you "Return to the Office." Sometimes it's code for "Prevent Competition From Rising Through the Technical Ranks."
It's one of those interesting and infuriating things. There's a category of PHB managers who can't code, can't design, can't inspire, can't really do marketing well at all. They're marginally more skilled and way less funny than Michael Scott.
Their advantage is being unencumbered by knowledge. They don't suffer the technical decision-making process. They don't try to compete on the value-added charts. By not really caring, they sail through dotting the i's and just forget about the t's. The product ships with a hundred blemishes and two major flaws. It was rushed out the door by the asshole with the padded resume and the HR surfing history.
They abused the flaws in the system to subvert the meritocratic outcome. From their perspective, they did what they had to do to "win."
I drew a comparison between “iterate and fail fast” and “lots of upfront design” as a personal process, rather than as a company’s modus operandi. For instance, someone might build two or three prototypes when tasked with delivering a certain feature, in order to explore the problem space.
I worked at a Canadian bank that operated like this. They had one "rock star" UI/UX developer who was deployed to build every new customer-facing initiative, but who was moved to the next project as soon as the current one made it to the testing phase.
The residual team was left to clean up their code and make it work, which often involved significant rework.
I raised my concerns with management that they were cheating their "rock star" dev out of valuable learning experiences and perpetuating the problem: the dev made the same errors on each new project. Not to mention perpetuating their testing nightmares.
One of the things I like about my current project is that more than half of the original team have stuck with it after we went live. The work itself is, admittedly, a bit less interesting, but it’s rewarding in a different way.
> The original authors are already part of another team (because “breaking silos”, but actually because “make everyone replaceable”)
I've been there and I've inherited some stuff to manage, and it's painful.
But there's another side of this, which is even more painful and definitely infuriating: some people get to stay on the same project for years and become "key people" just because they've either (a) stumbled on the issues before or (b) introduced the issues themselves.
This is completely infuriating, because now you have to play cat and mouse with these people to get help, and they get to play the "super busy, everybody asks me stuff" role because they're the only ones with that historical knowledge.
Needless to say, they can also back the claim that they deserve a promotion (and a salary increase) by executing on this playbook. And they usually do.
I've seen this thing happen in pretty much all company sizes (200 people, 1000 people, 500k+ people).
At this point, after almost ten years in the industry, I'm starting to think this is the winning playbook for the meta-game: go rogue in a maliciously-compliant way, artificially claim superiority over your peers, get promoted.
It’s awful.