Not trying to detract from the OP, but to provide a combination I have running in production for 2 years.
- AWS CDK (with outputs)
- https://github.com/zappa/Zappa
- a Python script which stitches outputs from CDK into the Zappa config
- extra Python scripts to do a few small things post-deployment. Zappa has some bugs that would be tedious to fix upstream vs. ~100 lines of Python.
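A minimal sketch of what that stitching script might look like, assuming CDK writes its outputs via `cdk deploy --outputs-file cdk-outputs.json`. The stack name, output keys, and which Zappa settings get populated are all placeholders here, not the setup described above:

```python
import json

def merge_cdk_outputs(outputs: dict, settings: dict, stack: str, env: str) -> dict:
    """Copy selected CDK stack outputs into a Zappa environment block.

    The CDK output names (VpcSubnetIds, etc.) are hypothetical; the Zappa
    keys (vpc_config, environment_variables) are standard zappa_settings keys.
    """
    stack_outputs = outputs[stack]
    env_settings = settings.setdefault(env, {})
    if "VpcSubnetIds" in stack_outputs:
        vpc = env_settings.setdefault("vpc_config", {})
        vpc["SubnetIds"] = stack_outputs["VpcSubnetIds"].split(",")
    if "LambdaSecurityGroupId" in stack_outputs:
        vpc = env_settings.setdefault("vpc_config", {})
        vpc["SecurityGroupIds"] = [stack_outputs["LambdaSecurityGroupId"]]
    if "DbSecretArn" in stack_outputs:
        env_vars = env_settings.setdefault("environment_variables", {})
        env_vars["DB_SECRET_ARN"] = stack_outputs["DbSecretArn"]
    return settings

# Intended use (file names are placeholders):
#   outputs  = json.load(open("cdk-outputs.json"))
#   settings = json.load(open("zappa_settings.json"))
#   merge_cdk_outputs(outputs, settings, stack="MyStack", env="dev")
#   json.dump(settings, open("zappa_settings.json", "w"), indent=2)
```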
Our product has the luxury of only being used during stock market hours and really thrives on serverless everything. We use RDS Serverless v2 along with Lambdas.
Some of our work is heavy, and we'll dynamically spin up an ECS container which has the hallmarks of a normal Django app: Redis + Celery queues. We try to saturate the ECS container's resources with this type of setup. After the container is done, it'll shut down.
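One way to kick off a one-shot worker like that is `ecs.run_task` via boto3. A sketch under assumed names (the cluster, task definition, and subnet below are placeholders, not the commenter's real setup); the task simply exits when its queue is drained, so nothing has to stop it explicitly:

```python
def build_run_task_request(cluster: str, task_definition: str, subnets: list) -> dict:
    """Build the kwargs for a one-shot Fargate ecs.run_task() call."""
    return {
        "cluster": cluster,
        "taskDefinition": task_definition,
        "launchType": "FARGATE",
        "count": 1,
        "networkConfiguration": {
            "awsvpcConfiguration": {
                "subnets": subnets,
                "assignPublicIp": "DISABLED",
            }
        },
    }

# With boto3 installed and AWS credentials configured:
#   import boto3
#   ecs = boto3.client("ecs")
#   ecs.run_task(**build_run_task_request(
#       "workers", "django-celery-worker", ["subnet-abc123"]))
```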
I was super skeptical of this 2 years ago. 4 envs costing ~$2k/month. I would do this setup again if the product warrants windowed usage.
Not this specifically, but I think I'm starting to see a pattern: Redis, Elastic, OpenAI, WordPress. Each came to realize that their open-source model has a glaring conflict with their corporate interests.
I do wonder if Tim Apple would be so bold as to stop selling in the EU over all the rules and regulations they have been required to follow. It would be amazing to see that play out imo. Like a reverse boycott could happen?
The secondary strategy Timmy could pull is a dumbed down ecosystem in the EU which I’m sure is even worse for everyone. But hey, you’d be able to buy audible books straight in the app. Is it worth it?
Sure, there are companies that choose not to be in a particular jurisdiction, like Meta has long chosen not to be in China.
But it's a lot easier to make that choice early, when you don't have to give up revenue. In Apple's case, leaving the EU would mean losing around $90 billion in annual revenue and ceding a large chunk of their global market share to Android. It's really hard to justify that to shareholders with "we didn't want to open some APIs in our operating system."
It might be easier to justify than that, though - the EU has shown an increasing willingness to dictate; Apple may project that out to causing more than $90B/yr in harm or slowed growth in the global business.
I resonate with this topic. Checking out your own repos on a new computer is one thing… inheriting someone else's project and getting it running on your machine in the Node ecosystem is very rough.
It has the advantage of using .vue files which I enjoy. Oh and guess what… it has code splitting because you have to define what components the page needs ;).
I used https://acquire.com/ in this case. I looked at maybe 80 companies at a surface level, then inquired about 15-20 of them for more details. After about a week of thinking, zeplo wouldn't leave my mind. There was one other close contender, but zeplo was more present in my mind and had better metrics.
From there it was letter of intent, due diligence, asset agreement, escrow, and asset transfer. Hopefully this helps.
Disclaimer: not a DBA, so my terminology might not be precise.
I've seen a uuid4 variant which replaces the first 4 bytes with a timestamp. It was mentioned to me that this strategy lets Postgres write at the end of the index instead of at arbitrary positions on disk. I also presume it gives some decent sorting.
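A stdlib-only sketch of that trick. Note the result is no longer a spec-compliant v4 (the version/variant bits aren't set), and a 4-byte seconds prefix wraps in 2106; UUIDv7 later standardized a similar idea with a 48-bit millisecond timestamp:

```python
import os
import time
import uuid

def timestamp_prefixed_uuid() -> uuid.UUID:
    """First 4 bytes = Unix time in seconds (big-endian), remaining
    12 bytes random, so newer IDs sort after older ones byte-wise."""
    ts = int(time.time()).to_bytes(4, "big")
    return uuid.UUID(bytes=ts + os.urandom(12))
```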
The common databases don't natively support generating ULIDs, to my knowledge. You can usually find extensions if you prefer generating them in the database instead of the application. I generate them in the application and store them as a UUID in PostgreSQL to avoid needing any database extensions.
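Since a ULID is also 128 bits (a 48-bit millisecond timestamp followed by 80 random bits), it can be generated in the application and handed to Postgres as a plain `uuid` value. A stdlib-only sketch — this skips the canonical Crockford base32 text encoding and the spec's monotonicity guarantee for same-millisecond IDs:

```python
import os
import time
import uuid

def new_ulid_as_uuid() -> uuid.UUID:
    """ULID binary layout: 6 timestamp bytes (ms, big-endian) + 10 random
    bytes, packed into a UUID so it fits a Postgres uuid column directly."""
    ts_ms = int(time.time() * 1000).to_bytes(6, "big")
    return uuid.UUID(bytes=ts_ms + os.urandom(10))
```

On the Postgres side the column is then just `id uuid PRIMARY KEY`, no extension required.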
It also has the advantage that the page being written to, the rightmost leaf at the end of the index, is likely to always be available in the page cache. With random IDs you may need to constantly go to disk to fetch the target page.
In this sequential-UUIDs idea, I wonder how big of a deal it is if the prefix part wraps around often? E.g. using a timestamp-based prefix with 2 bytes, if you increase the prefix every 60 seconds, the prefix will wrap around every 45 days or so (60 * 1000 * 2^16 ms) according to that README. Does it make sense to fine-tune this value based on the use case, or what?
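Checking that README's arithmetic for this configuration (2-byte prefix, incremented every 60 seconds):

```python
# 2**16 distinct prefix values, each held for 60 seconds,
# gives the full wraparound period.
intervals = 2 ** 16
seconds_per_interval = 60
wrap_seconds = intervals * seconds_per_interval   # 3,932,160 s
wrap_days = wrap_seconds / 86_400
print(round(wrap_days, 1))  # → 45.5
```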
Are there any clear downsides to sequential prefixes on UUIDs? I would imagine if you're producing new objects at a high enough rate, you'd have a lot of prefix collisions, which would hinder search times. I've never benchmarked to confirm that though.
If the prefix is incremented for every new ID, you essentially have the same problem as you do with serial: you leak information about the number of rows created in some timeframe.
As the link posted above mentions, you can alternatively use a timestamp-based prefix that wraps around after all the bits have been used. This still leaks possible creation times of the record, so it's on par with or better than UUIDv6, ULID, etc. (because here the exact creation time can't necessarily be deduced).
In all of these UUID schemes apart from the fully random v4, you are trading off better index performance against some level of information leakage about the record the ID is associated with.
This was a reason I left a job. The company I worked for was focused on government RFPs. During the Obama administration there were promises to support small minority- and veteran-owned businesses. The company was failing to win any contract of substance. So one day we bid on a contract to provide Dell servers. Out of desperation, the other members of the team decided to bid lower than the business could actually sustain.
Turns out we finally won something… which we weren’t ‘designed’ to win. Dell came screaming in on the phone asking what the hell we were doing. Dell had written the RFP and ‘partnered’ with a small company to basically win the contract outright. That’s when I learned the system was rigged and it still favored big companies and their selected friends.