The counterargument I'd make is that having all these integer IDs comes with many problems. You can't make them public because they leak info. They aren't unique across environments, which means you have to spin up a bunch of bs envs just to run things. But retros are for complaining about test envs, right?
"UUIDv4 only has 122 random bits" is a bs argument. Such a made-up problem.
But a fair point is that one should use a sequential UUID, one with a time component, to avoid index fragmentation.
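To make the "sequential UUID" idea concrete, here is a minimal sketch of a time-ordered ID in the UUIDv7 style: a 48-bit millisecond timestamp up front and random bits after, so new keys sort roughly by insertion time instead of scattering across the index. The function name is made up and this is an illustration of the layout, not a vetted implementation (newer Python versions may ship a uuid7 helper directly).

```python
import os
import time
import uuid

def time_ordered_uuid() -> uuid.UUID:
    """Sketch of a UUIDv7-style ID: timestamp prefix, random suffix."""
    ms = int(time.time() * 1000) & ((1 << 48) - 1)  # 48-bit Unix ms timestamp
    rand = int.from_bytes(os.urandom(10), "big")    # 80 random bits
    value = (ms << 80) | rand
    # Stamp the version (7) and variant (10xx) bits so it parses as a UUID.
    value &= ~(0xF << 76)
    value |= 0x7 << 76
    value &= ~(0x3 << 62)
    value |= 0x2 << 62
    return uuid.UUID(int=value)

print(time_ordered_uuid())  # time-prefixed, so consecutive calls sort in order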
Some additional cases we encounter quite often where UUIDs help:
- A client used to run our app on-premises and now wants to migrate to the cloud.
- Support engineers want to clone a client’s account into the dev environment to debug issues without corrupting client data.
- A client wants to migrate their account to a different region (from US to EU).
Merging data using UUIDs is very easy because ID collisions are practically impossible. With integer IDs, we'd need complex and error-prone ID-rewriting scripts. UUIDs are extremely useful even when the tables are small, contrary to what the article suggests.
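As a rough sketch of what that ID-rewriting looks like with integer keys (table and column names here are made up for illustration): every colliding primary key from the source environment has to be remapped, and every foreign key that references it has to be rewritten with the same mapping. With UUID keys the merge degenerates into a plain copy.

```python
def merge_with_int_ids(target_accounts, source_accounts, source_orders):
    """Merge two environments when accounts use integer PKs."""
    next_id = max((a["id"] for a in target_accounts), default=0) + 1
    remap = {}
    for acc in source_accounts:
        remap[acc["id"]] = next_id                 # old id -> new non-colliding id
        target_accounts.append({**acc, "id": next_id})
        next_id += 1
    # Every referencing row needs the same remapping applied, table by table.
    merged_orders = [{**o, "account_id": remap[o["account_id"]]} for o in source_orders]
    return target_accounts, merged_orders

def merge_with_uuids(target_accounts, source_accounts, source_orders):
    """With UUID keys nothing collides, so a merge is just concatenation."""
    return target_accounts + source_accounts, source_orders
```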
If merging or moving data between environments is a regular occurrence, I agree it would be best to have non-colliding primary keys. I have done an environment move (new DB in a different AWS region) with integers and sequences for a roughly 100-table DB, and it's doable but a high-cost task. At that company we also had the demo/customer-preview environment concept where we needed to keep the data but move it.
Sounds like something a buyer would say. Surely Netflix can handle HBO traffic better and cheaper. Maybe HBO are stuck in some deals. But it is a no-brainer to trash the HBO backend over time.
Not missing working with Lua in proxies. I think this is no big thing. They rolled back the change fairly quickly. Still bad, but that mid-November outage was worse since it was many bad decisions stacking up and it took too long to resolve.
The per-token cost in the docs is pretty much a worthless metric for these models. The only way to know is to plug it in and test it. My experience is that Claude is an expert at wasting tokens on nonsense: easily 5x the output tokens compared to ChatGPT, and on top of that Claude wastes about 2-3x more tokens by default.
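A quick back-of-the-envelope illustration of why the listed rate says little on its own: if one model emits several times more output tokens for the same task, its "cheaper" per-token price can still work out more expensive. All numbers below are made-up placeholders, not real pricing.

```python
def task_cost(output_tokens: int, usd_per_million: float) -> float:
    """Cost of one task given output tokens and a per-million-token rate."""
    return output_tokens / 1_000_000 * usd_per_million

concise_model = task_cost(output_tokens=2_000, usd_per_million=10.0)
verbose_model = task_cost(output_tokens=10_000, usd_per_million=5.0)  # 5x the tokens at half the rate

print(f"concise model: ${concise_model:.4f} per task")  # $0.0200
print(f"verbose model: ${verbose_model:.4f} per task")  # $0.0500
```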
This is spot on. The amount of wasteful output tokens from Claude is crazy. The actual output you're looking for might be better, but you're definitely going to pay for it in the long run.
The other angle here is that it's very easy to waste a ton of time and tokens with cheap models. Or you can more slowly dig yourself a hole with the SOTA models. But either way, and even with 1M tokens of context - things spiral at some point. It's just a question of whether you can get off the tracks with a working widget. It's always frustrating to know that "resetting" the environment is just handing over some free tokens to [model-provider-here] to recontextualize itself. I feel like it's the ultimate Office Space hack, likely unintentional, but really helps drive home the point of how unreliable all these offerings are.
So they made a newbie SQL mistake that wouldn't even pass an AI review. They didn't verify the change in a test environment. And I guess the logs are so full of errors that it's hard to pinpoint which ones matter. Yikes.
I care less and less about things like this. At some point you'll write code in some lang that has fancy keywords and stuff gets mutated anyway. Also, what people tend to do when stuff is immutable is hide the mutation by making deep copies with changes.
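A small sketch of that "hide the mutation behind a copy" pattern: the object is technically immutable, but code just produces a modified copy instead, so state still changes hands much the same way. The class and field names are invented for illustration.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Account:
    name: str
    balance: int

acc = Account(name="alice", balance=100)
# acc.balance = 50 would raise FrozenInstanceError on a frozen dataclass...
acc2 = replace(acc, balance=50)  # ...so we "mutate" by copying with a change
print(acc, acc2)
```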
This battle was lost a looong time ago. The effort it takes to keep up with all the shenanigans of Google and the Play Store is way worse than these new changes.