Developing directly on the production database with no known backups. Saved from total disaster by pure luck. Then a bunch of happy talk about it being a "small price to pay for the lessons we gained" and how such failures "unleash true creativity". It's amazing what people will self-disclose on the internet.
That's the first thing I took away. The author ignores every sane software engineering practice, is saved by pure luck, and then dives into which commands not to use in Supabase. Why do this? Why not spend a week or two before you launch to set up a decent CI/CD pipeline? That's the real lesson here.
I cut my dev teeth in a financial institution, so I'll concede I'm biased away from risk, but devving directly on the prod DB, not having a local environment to test changes against, and worse: literally no backups... it screams reckless, stupid, cheap, arrogant, and immature (in the tech sense). Nothing I'd like my name against publicly.
A colleague upgraded the production database for a securities financing settlement system on a Friday evening by accident 20 years ago.
We were devs with root access to production and no network segregation.
He wanted to upgrade his dev environment, but chose the wrong resource file.
He was lucky it was a Friday, because it took us the whole weekend working round the clock to get the system and the data to a consistent state by start of trading.
We called him The Dark Destroyer thereafter.
So I would add network segregation to the mix of good ideas for production ops.
Right?! This whole post is kinda absurd. It has the feel of a kid putting a fork into an outlet, getting the shock of a lifetime and then going “and thanks to this, everyone in my household now knows not to put a fork into an outlet.” You didn’t have to go through all this to figure out that you need backups. The fluff is the cherry on top
While I agree with everything said here about making backups etc., which I have done in my career at later-stage companies, when you are just starting out and building MVPs I'd argue (as I do in the newsletter) that losing 2 weeks to set up CI/CD pipelines and backups before you can pay the rent is a waste of time!
I was a Supabase noob back then so I had not explored their features for local development, which is the learning I try to share in this post.
I dunno. The effort needed to ensure you have backups is tiny compared to the work done to create the product. And to pull a backup before deleting stuff in production only needs a smidgen of experience.
They were extremely lucky. Imagine what the boss would have said if they hadn't managed to recover the data.
Owww. The first or second paragraph of this made me cringe
"I had just finished what I thought was a clean migration: moving our entire database from our old setup to PostgreSQL with Supabase" ... on a Friday.
Never do prod deploys on a Friday unless you have at least 2 people available through the weekend to resolve issues.
The rest of this post isn't much better.
And come on. Don't make major changes to a prod DB when critical team members have signed off for a weekend or holiday.
I'm actually quite happy OP posted their experiences. But it really needs to be a learning experience. We've all done something like this and I bet a lot of us old timers have posted similar stories.
I hope the poster will learn about transactions at some point. Postgres even lets you alter the schema within a transaction.
What I learned, once upon a time, is that with a database, you shouldn't delete data you want to keep. If you want to keep something, you use SQL's fine UPDATE to update it, you don't delete it. Databases work best if you tell them to do what you want them to do, as a single transaction.
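Something like this, roughly (the table and column names are made up):

    BEGIN;
    -- Postgres lets schema changes run inside the transaction too
    ALTER TABLE leads ADD COLUMN archived_at timestamptz;
    -- Mark the rows instead of deleting them
    UPDATE leads
       SET status = 'archived', archived_at = now()
     WHERE status = 'stale';
    -- Check the reported row count; ROLLBACK if it looks wrong, otherwise:
    COMMIT;

If the numbers look off, you roll back and nothing ever touched the data.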
I use transactions all the time for my other projects, and I've read the great Designing Data Intensive Applications, which covers the topic of linearizability in depth.
Only as a matter of low level storage. It won't trigger ON DELETE CASCADE and that kind of thing.
This is a kind of misunderstanding I've heard from others who were first exposed to hacky things like early MySQL. Databases are something else. A different kind of beast. If you use a database, and Postgres is the best of the DBMSes, then you can say things like "a lead shouldn't be deleted before three months have passed, no matter what" or "a lead can't be deleted until its state column says it's been handled" and the DBMS will make sure of it. If you have a bug that would involve leads being deleted prematurely, the DBMS will reject your change. Your change just won't break the database.
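For example, the second rule could be expressed roughly like this (made-up table and column names, and it assumes a recent enough Postgres for EXECUTE FUNCTION):

    CREATE FUNCTION forbid_premature_lead_delete() RETURNS trigger AS $$
    BEGIN
      -- Refuse the delete unless the lead has been handled
      IF OLD.state <> 'handled' THEN
        RAISE EXCEPTION 'lead % has not been handled yet', OLD.id;
      END IF;
      RETURN OLD;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER leads_no_premature_delete
      BEFORE DELETE ON leads
      FOR EACH ROW EXECUTE FUNCTION forbid_premature_lead_delete();

Any DELETE that hits an unhandled lead just errors out, no matter which app or migration script issued it.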
This is such a poorly written post, and I'm sure there are ongoing disasters waiting to happen. I've built 3 startups and sold 2 of them and never, ever developed on production. What level of crazy is this?
While I don't question the maturity model in itself (which I read after the incident, and which is why I started putting migrations in git right after), I realized it was harder to get working well than other Supabase features, especially once you start using more than just authentication and Postgres.
In particular, webhooks and triggers don't work out of the box. So maybe it's not pushing you in a particular direction, but I'd argue it's not exactly nudging you toward local development either, because it takes some hours of custom setup and debugging before CLI commands like supabase db diff actually work as intended, in my experience. But I know the Supabase team is improving it every release, so I'm thankful for this work!
>Here's the technical takeaway: Never use CASCADE deletes on critical foreign keys.
The technical takeaway, as others have said, is to do prod deployment during business hours when there are people around to monitor and to help recover if anything goes wrong, and where it will be working hours for quite a while in the future. Fridays are not that.
When you are a 3-person startup, I'd argue there is no such thing as "business hours". I worked every day back then. I'll concede that the "Friday Night" part in the title might be a bit clickbait in that regard.
To be fair, this was the norm 10 years ago. It just seems like he's stuck in the past. There's really no excuse: provision an EC2 volume and dump all backups there. I'm not even in prod yet and have full backups to LTO, ready for launch next month.
I'm sorry, but there's "move fast and break things" and then there's a group of junior devs not even bothering to google a checklist of development or moving to production best practices.
Your Joe AI customers should be worried. Anyone actually using the RankBid you did a Show HN on 8 months ago should be worried (particularly by the "Secure by design: We partner with Stripe to ensure your data is secure." line).
If you don't want to get toasted by some future failure where you won't be accidentally saved by a vendor, then maybe start learning more on the technical side instead of researching and writing blogspam like "I Read 10 Business Books So You Don't Have To".
This might sound harsh, but it's intended as sound advice that clearly nobody else is giving you.
Thanks for the feedback, I really appreciate it. Rankbid and the other projects I've made I built from scratch myself. They have strong, solid technical foundations. Try them for yourself; even try to hack them if you want, it would prove my point.
This was not the case with Joe AI. I joined later in the project, and the foundations were even weaker than what is shown in this newsletter (no API endpoint authentication whatsoever, completely open, for example), so I had to secure and migrate everything myself when I joined them. That is what the Supabase migration was trying to accomplish. Before I joined, they didn't even have a database, but I won't get into the details here.
Before Rankbid and the other products I've built, I worked at a B2C startup with millions of users and never caused a big outage there. I've been programming for more than ten years and I have a double degree in computer science. While I agree with what "should be done" in theory for production-level apps, sometimes you need to move very fast to build great startups. I've read many technical books in my life, such as Designing Data Intensive Applications and High Performance Browser Networking. I know the theory, but sometimes you just don't have the time to do everything perfectly. That's what I try to convey in this blog post. I also wanted to share a humbling experience. Everyone makes mistakes, and I'm not ashamed of making some, even after years of software engineering.
My newsletter is about the intersection of programming and business. You might not find the "business" part interesting, which is fine, but I think what you call blogspam has real value for engineers who have never sold before in their life and want to learn the ropes. I spend a lot of time writing each edition, because I try to respect the time of my readers as much as possible and deliver some actual insights (even if there is a bit of fluff or storytelling sometimes).
And for Joe AI: it has since become much more secure, and is progressively implementing engineering best practices, so customers don't have to worry.
I dropped the production database at the first startup I worked at, three days after we went live. We were scrappy™ and didn’t have backups yet, so we lost all the data permanently. I learned that day that running automated tests on a production database isn’t a good idea!
Here is another one: Don't trust ops when they say they have backups. I asked and was told there were weekly full backups, with daily incrementals. The time came when I needed a production DB restored due to an upgrade bug in our application. That was bad - thank $DEITY we have backups.
OPS: Huh, it appears we can't find your incremental.
ME: Well just restore the weekly, it's only Tuesday.
Two days later.
OPS: About that backup. Turns out it's a backup of the servers, not the database. We'll have to restore to new VMs in order to get at the data.
ME: How did this happen?
OPS: Well, the backups work for MSSQL Server.
ME: This is PostgreSQL.
OPS: Yeah, apparently we started setting that up but never finished.
ME: You realize we have about 20 applications using that database?
OPS: Now we do.
Lesson: Until you personally have seen a successful restore from backup, you do not have backups. You have hopes and prayers that you have backups. I am forever in the Trust but Verify camp.
If your company is big enough to have dedicated ops then it should be running regular tests on backups. A disaster recovery process if you will.
At some point though it's not your problem when the company is big enough. Are you gonna do everyone's job? You tell them what you need in writing, and if they drop the ball it's their head.
The majority of our apps were Java, running Tomcat on Windows Server with MSSQL or Oracle. Those were tested as part of DR. Our Linux servers running Python and Postgres were apparently not as high a priority.
The lack of working backups made it a problem because of the assurances and certifications we were required to maintain.
When starting a new project I now request a dev database restored from a prod dump that's more than 30 days old, just to see the process work. Does it waste their time? Maybe, in which case it encourages more automation. Do I care? No. But I am not getting burned again.
It’s relative. No, I’m not sitting on the shoulder of the team that manages that (nor should I, there’d be 40 EMs bothering them!) but I fully expect my CTO has done it. And if not? Well, one day it’ll blow up and I’m looking for another job but that’s no different to any other possible major issues.
Uhh, no, the answer is not to avoid cascading deletes. The answer is to not develop directly on a production database and to have even the most basic of backup strategies in place. It is not hard.
Also, “on delete restrict” isn’t a bad policy either for some keys. Make deleting data difficult.
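Something like this, with made-up tables:

    -- Refuse to delete a customer that still has invoices
    CREATE TABLE customers (id bigint PRIMARY KEY);
    CREATE TABLE invoices (
      id          bigint PRIMARY KEY,
      customer_id bigint NOT NULL REFERENCES customers (id) ON DELETE RESTRICT
    );
    -- This now fails as long as any invoice still points at customer 42:
    -- DELETE FROM customers WHERE id = 42;

Deleting the parent fails loudly until you've dealt with the children, instead of silently taking them with it.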
> Here's the technical takeaway: Never use CASCADE deletes on critical foreign keys. Set them to NULL or use soft deletes instead. It's fine for UPDATE operations, but it's too dangerous for DELETE ones. The convenience of automatic cleanup isn't worth the existential risk of chain reactions.
I actually agreed 100% with this learning, especially the last sentence. The younger me would write a long email to push for ON DELETE CASCADE everywhere. The older me doesn't even want to touch Terraform, where an innocent-looking update can end up destroying everything. I'd rather live with some orphaned records and some infra drift.
And still I got burnt a few months ago, when I inadvertently triggered some internal ON DELETE CASCADE logic of Consul ACLs.
Assuming storage cost is not a huge concern, I’m a big fan of soft deletes everywhere. Also leaves an easy “audit trail” to see who tried to delete something.
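Roughly this shape, with a made-up table:

    -- Keep the row; just record who "deleted" it and when
    ALTER TABLE documents
      ADD COLUMN deleted_at timestamptz,
      ADD COLUMN deleted_by text;

    -- "Deleting" becomes an UPDATE
    UPDATE documents
       SET deleted_at = now(), deleted_by = current_user
     WHERE id = 123;

    -- Everyday queries filter the soft-deleted rows out
    SELECT * FROM documents WHERE deleted_at IS NULL;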
Of course, there are exceptions (GDPR deletion rules etc.)
I tried to have a conversational, storytelling style; maybe that's why you think there are lots of "AI-isms".
But I take this as feedback for the next editions: less fluff, more straight-to-the-point writing. Thanks!
Echoing the other comments about just how bad the setup here is. Setting up staging/dev environments does not take so much time as to put you behind your competition. There's a vast, VAST chasm between "We're testing on the prod DB with no backups" and the dreaded guardrails and checkboxes.
That being said, I would love to see more resources about incident management for small teams and how to strike this balance. I'm the only developer working on a (small, but somehow super political/knives-out) company's big platform with large (F500) clients and a mandate-from-heaven to rapidly add features -- and it's by far the most stressed out I've ever been in my career if not life. Every incident, whether it be the big GCP outage from last week or a database crash this week, leads to a huge mental burden that I have no idea how to relieve, and a huge passive-aggressive political shitstorm I have no idea how to navigate.
I once remailed emails to IEEE and ACM. I was ready to quit and take the L for such a bad mistake, not write a blog post for Friday evening consumption.
This is a good story and something everyone should experience in their career even just for the lesson in humility. That said:
> Here's the technical takeaway: Never use CASCADE deletes on critical foreign keys. Set them to NULL or use soft deletes instead. It's fine for UPDATE operations, but it's too dangerous for DELETE ones. The convenience of automatic cleanup isn't worth the existential risk of chain reactions.
What? The point of cascading foreign keys is referential integrity. If you just leave dangling references everywhere your data will either be horribly dirty or require inconsistent manual cleanup.
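With SET NULL you end up hand-writing cleanup queries like these (hypothetical tables) and hoping someone remembers to run them:

    -- Children whose FK was nulled out when the parent was deleted
    SELECT id FROM order_items WHERE order_id IS NULL;

    -- Or, without a real FK, rows pointing at orders that no longer exist
    SELECT li.id
      FROM order_items li
      LEFT JOIN orders o ON o.id = li.order_id
     WHERE li.order_id IS NOT NULL
       AND o.id IS NULL;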
As I'm sure others have said: just use a test/staging environment. It isn't hard to set up even if you are in startup mode.
Dropping the DB on day 3 of your business? Probably fine. Dropping it on your day 3 at the company, but on day 300 of the business, when you have paying customers? Seriously?