Author here. Wrote a bit about how we use Terraform for local dev at work. In brief:
Our Docker Compose local dev setup started to break down once we had to model more complicated production behavior locally – things like table-specific Postgres roles for audit logs and dynamically provisioned databases per-clinical-trial. We were drifting toward a bespoke Bash mess to keep dev and prod in sync.
Our core idea was instead to embrace Terraform in the local dev environment too. We were already using Terraform heavily in prod, and Terraform's robust provider ecosystem meant that we could e.g. substitute Docker containers for RDS and MinIO for S3 without deviating too far from our production configuration.
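To make that concrete, here's a minimal sketch of what the substitution might look like using the kreuzwerker/docker Terraform provider. The resource names, image tags, and port choices are illustrative, not our actual config:

```hcl
# Hypothetical sketch: stand-ins for RDS (Postgres in Docker) and S3 (MinIO).
resource "docker_container" "postgres" {
  name  = "local-postgres"
  image = "postgres:15"
  env   = ["POSTGRES_PASSWORD=postgres"]
  ports {
    internal = 5432
    external = 5432
  }
}

resource "docker_container" "minio" {
  name    = "local-minio"
  image   = "minio/minio:latest"
  command = ["server", "/data"]
  ports {
    internal = 9000
    external = 9000
  }
}
```

The point is that the rest of the Terraform config (roles, grants, buckets) can stay nearly identical between local and prod; only the resources backing them change.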
The really fun part is how we use Terraform to handle dynamic provisioning, which we need for isolated, per-clinical-trial databases. The way we do it in prod is by giving each clinical trial its own isolated Terraform state, stored in a cloud storage bucket. By writing an equivalent local Terraform config for this provisioning step, we enable the app to run the same `terraform apply` command as it does in prod to locally spin up a new database, with the individual state for that new db stored in a local MinIO bucket... which is itself created by the original `terraform apply` that sets up the initial local dev infrastructure.
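The per-trial state trick works because MinIO speaks the S3 API, so Terraform's S3 backend can point at it. A hedged sketch (bucket name, key, and credentials are illustrative; the exact skip/path-style flags vary by Terraform version):

```hcl
# Hypothetical sketch: store per-trial state in local MinIO via the S3 backend.
terraform {
  backend "s3" {
    bucket                      = "trial-states"
    key                         = "trials/trial-001/terraform.tfstate"
    region                      = "us-east-1"
    endpoint                    = "http://localhost:9000"
    access_key                  = "minioadmin"
    secret_key                  = "minioadmin"
    force_path_style            = true
    skip_credentials_validation = true
    skip_metadata_api_check     = true
  }
}
```

Since backend blocks can't interpolate variables, the per-trial `key` would in practice be passed at init time, e.g. `terraform init -backend-config="key=trials/<trial-id>/terraform.tfstate"`.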
Altogether, Terraform gives us a super high-fidelity local environment that lets us test complex application behavior and infrastructure logic without the full overhead of spinning up a local k8s cluster (which is what I imagine the next best alternative might be?). It's readable, declarative, and required no new tooling on our side since we were already using Docker and Terraform anyways.
Curious to hear how other folks are managing complex local dev setups, especially if you're not on Kubernetes!
You're right that IDE/dev-time performance might be slower than using generated types since this relies on "dynamic" TypeScript inference rather than static codegen'd types.
That said, depending on how your codegen works and how you're using protos at runtime, this approach might actually be faster at runtime. Types are stripped at compile-time and there’s no generated class or constructor logic — in the compiled output, you're left with plain JS objects which potentially avoids the serialization or class overhead that some proto codegen tools introduce.
(FWIW, type inference in VSCode seemed reasonably fast with the toy examples I was playing with)
TypeScript never generates classes or constructors that aren't present in source code. Whether or not constructors are present is completely independent of whether you're using codegen.
It's a fun route! If you give it a shot and have a bike that fits you well, you'd be amazed at how quickly you can build up fitness for cycling, heavy or not.
I interned at a FAANG last summer (Google) and I was blown away at how good the in-house tooling was (including the in-house IDE).
Everything was seamless - I'd log in, tap my security key, and instantly have access to pretty much the entire monorepo and the rest of the production + deploy systems. All of the internal tools integrated with each other -- for instance, I could create a CL (i.e. a pull request) and then fix issues raised by the CI system, entirely from the IDE.
"Owning the stack" completely in-house also extended to hardware -- all of my builds happened inside Google datacenters too (not on my machine), and the development box they provided me was a Chromebook.
Nice to see a comment like this, I feel the same way. Google is the exception to all these other comments. The tools available to Google engineers make working outside Google feel painful and inefficient by comparison, even with unfettered access to all the third-party tooling.
It’s so good that it would actually be a factor if I were ever considering working somewhere else. Getting to work with Google’s in-house dev tooling is probably worth 15-20k to me. A lot more than 20k if the other company has a reputation for horrible tooling.
This is in contrast to Apple's internal software systems, at least if recent reports are accurate; aside from (I assume) Xcode, it seems the business-ey enterprise stuff is outsourced to contractors and software consulting firms and is a giant mess...
Granted, their DNA is still in hardware/devices/OS level stuff versus "services" despite all they claim to be, but how much more quickly could they move if they "owned more" of their stack?
Sun didn't have a monorepo -- at least not 1998 through the acquisition. Each org of a given size (Solaris os/net, install, etc) had its own separate island of source with its own build and test systems.
I guess my comment wasn’t clear. Talking about how you could just grab a random terminal with your smart card at Sun, and get work done. True at Google, true at Sun back in the day, not true at most places.
Ah, I remember when the in-house IDE first came out, and it was considered a toy compared to vim/emacs. It has come a long way since then.
That said, I'm also looking forward to "VSCode front-end in the browser" becoming ubiquitous. I remember using Visual Studio back in my first job in high school, and Intellisense made coding so much more... explorable and approachable.
We're not at the scale of Uber yet, but at Jupiter (YC S19) we're using tech like Bazel, protos, and Kotlin to prepare us for when we are: https://starship.jupiter.co/jupiter-stack/
We're actively hiring for a senior SWE right now, so feel free to shoot me a note if you're looking.
Protos is more of a tradeoff (faster & smaller network serialization vs. ease of development) than a company-scale thing. And a thing to do if you want to integrate with gRPC.
I worked at a company that used protos from day one for example.
Many large companies are still using JSON w/ schemas as their network serialization layer just fine.
Yes - Bazel has excellent protobuf integration compared to other build tools (we were originally using Gradle). We're compiling Kotlin to the JVM, so the build chain is similar to Java's.
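To give a flavor of that integration (target names are illustrative, and this assumes rules_proto and rules_kotlin are already set up in the workspace), a BUILD file can chain a proto straight through to Kotlin code:

```
# Hypothetical BUILD sketch: proto -> generated Java classes -> Kotlin consumer.
proto_library(
    name = "account_proto",
    srcs = ["account.proto"],
)

java_proto_library(
    name = "account_java_proto",
    deps = [":account_proto"],
)

kt_jvm_library(
    name = "account_service",
    srcs = ["AccountService.kt"],
    deps = [":account_java_proto"],
)
```

Because Kotlin interoperates with Java on the JVM, the Kotlin target can depend on the generated Java proto classes directly, and Bazel handles the codegen step as part of the build graph.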