Sorta? I mean, I want my problem fixed, regardless of whether it's a person or not. Having a person listen to me complain about my problems might soothe my conscience, but if I can't pay my bill, or want to know why it was so high, and those get answered by a system that's contextualized to my problem and empowered to fix it, rather than by talking to a brick wall? I wouldn't say I'm totally fine with it, but at the end of the day, if my problem or my query gets resolved, even if it's a weird one, I can't say I really needed the voice on the other end of the phone to come from a human. If a company's business model isn't sustainable without using AI agents, that's not really my problem, but also, if I'm using their product, presumably I don't want it to go away.
Isn't the real scary thing here that the AI agent is empowered to control your life?
You're imagining that if you get the answer you want from the AI you hang up the phone, and if you don't, a human will pick up and have the political power and will to overrule the AI. I think what's more realistic is the way things have played out here: nobody took or had any responsibility because "they made the AI responsible," and second-guessing that choice isn't second-guessing the AI, it's second-guessing the human leaders who decreed that human support had no value. The result is that the humans would let the AI go about as far as setting fire to the building before some kind of human element with any real accountability steps in. The evidence of this is all the ignored requests presented here.
Function names are limited. E.g. you can't provide a circuit diagram of what you're controlling in a function name. But you can do that in a comment (either with ASCII art or an image link).
In addition to that, if the Why ever changes (maybe the issue was in an external dependency that finally got patched), you'd have to update the name or else leave it incorrect. Mildly annoying if just in one codebase, but a needlessly breaking change if that function is exported.
Agreed. So why not stuff as much as possible into the name before resorting to a comment? Prose looks ugly as a name but the utility is not diminished.
That embeds the "why" into your API. If it ever changes, the function no longer serves as an abstraction over that underlying reason & changing the function name breaks your API.
That's not to say embed nothing into the names. I'm quite fond of the "Long Names are Long" blog post[1]: names need to clearly refer to what the named thing does, and be precise enough to exclude what it doesn't do. Names can certainly get too short, e.g. the C "sprint fast" function `sprintf` is probably too short to be easily understood.
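As a rough sketch of that trade-off (the names and the "why" comment below are made up for illustration):

```python
# Too short: the caller can't tell what this does without reading the body.
def prc(d):
    ...

# Clear about what it does (and what it doesn't), while the "why" lives in a
# comment, so the name doesn't have to change if the underlying reason does.
def retry_failed_payments(day):
    # Why: the upstream gateway returns transient 503s during nightly
    # maintenance, so charges that failed in that window are safe to retry once.
    ...
```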
> React performance concerns in the real world are typically measured in, at worst, hundreds of milliseconds.
Many years ago I worked at a wonderful company that made the terrible decision to rebuild the frontend in React. We had a performance dashboard permanently displayed on a TV in the office, prominently showing the p99 time-to-interactive of our home page. It sat at TWENTY SECONDS for at least 2 years. No progress was ever made, to the best of my knowledge.
This was an e-commerce site, more or less. As per the author's reasoning, it absolutely should not have been an SPA.
The only client-side component of interest would be filtering and sorting (although the server could render the new state too). I would choose traditional server-side rendering plus a little bit of client-side code here.
It is often the case that the nifty Python thing you want to pass around uses one or more nifty Python libraries that have no C/C++/Rust/Golang equivalent (or no obvious equivalent), and so rewriting it becomes a herculean task.
They’re not wrong. If you’ve ever spent meaningful time administering both, you’ll know that Postgres takes far more hands-on work to keep it going.
To be clear, I like both. Postgres has a lot more features, and is far more extensible. But there’s no getting around the fact that its MVCC implementation means that at scale, you have to worry about things that simply do not exist for MySQL: vacuuming, txid wraparound, etc.
I doubt it was true in 2012, because sysadmins would be the ones trying to make it run reliably, including things like replication, upgrades, etc.
Pretty sure that even in 2012 MySQL had very easy to use replication, which Postgres didn't have well into the late 2010s (does it today? It's been a while since I've run any databases).
> I doubt it was true in 2012, because sysadmins would be the ones trying to make it run reliably, including things like replication, upgrades, etc.
Possibly I got it wrong and switched around which was easier on the devs and which was easier on the sysads?
In my defence, ISTR, when talking to sysads about MySQL vs PostgreSQL, they preferred the latter due to having less to worry about once deployed (MySQL would apparently magically lose data sometimes).
MyISAM in the olden days could/would magically lose data. InnoDB has been the de facto standard for a while and I haven't seen data loss attributed to it.
In 2012 MySQL had several flavors of replication, each with its own very serious pitfalls that could introduce corruption or loss of data. I saw enough MySQL replication issues in those days that I wouldn't want to use it.
Sure, it was easy to get a proof of concept working. But when you tried to break it by turning off the network and/or machines, things fell apart in very broken ways that were not recoverable. I'm guessing most people who set up MySQL replication didn't actually verify that it worked well when the shit hit the fan.
> pitfalls that could introduce corruption or loss of data
Sometimes, repairing broken data is easier than, say, upgrading a goddamn hot DB.
MVCC is overrated. Not every row in a busy MySQL table is your transactional wallet balance. But to upgrade a DB you have to deal with every field of every row of every table, and the data keeps changing, which is a real headache.
Fixing a range of broken data, however, can be done by a junior developer. If you rely on an RDBMS as a single source of truth, you are probably fucked anyway.
My experience has been exactly the opposite. The ability to do vacuums is good. MySQL doesn't free up the space taken by deleted rows; the only option to reclaim it is to mysqldump the db and load it again. Not practical in most situations.
VACUUM rarely reclaims space from the OS's perspective, if that's what you meant. It can in certain circumstances, but they're rare. VACUUM FULL is the equivalent of OPTIMIZE TABLE – both lock the table to do a full rewrite, and optimally binpack it to the extent that is possible.
EDIT: my mistake, OPTIMIZE TABLE is an online DDL. I've been burned in the past by foreign key constraint metadata locks essentially turning it into a blocking operation.
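For anyone who wants to poke at the difference, here's a minimal sketch of running both from Python with psycopg2 (the table name and connection string are made up; note VACUUM can't run inside a transaction block):

```python
import psycopg2

conn = psycopg2.connect("dbname=shop user=app")
conn.autocommit = True  # VACUUM refuses to run inside a transaction block

with conn.cursor() as cur:
    # Plain VACUUM: marks dead tuples as reusable, but rarely returns space to the OS.
    cur.execute("VACUUM (VERBOSE) orders")
    # VACUUM FULL: rewrites the table compactly, but holds an exclusive lock while it does.
    cur.execute("VACUUM (FULL) orders")

conn.close()
```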
That helps a lot, thanks. I'll summarize it quickly for those who come later: MySQL (InnoDB really) and Postgres both use MVCC, so they write a new row on update. InnoDB, however, additionally writes a record marking the old row for deletion.
To do cleanup, InnoDB uses the records it kept to delete old data, while Postgres must do a scan. So InnoDB pays a record-keeping price as part of the update that makes it easier to clear data, while Postgres instead pays the price of occasional scanning.
I don't know how VACUUM works, so I couldn't tell you about the differences.
OPTIMIZE TABLE works almost exclusively as an online DDL operation. There's only a brief table lock held during the table metadata operations, but I haven't found that to be a problem in practice. (https://dev.mysql.com/doc/refman/8.4/en/optimize-table.html#...)
Not in around 15 years. You're thinking of when MyISAM was the default storage engine for MySQL. It has been InnoDB for over a decade. InnoDB is very reliable - I've never had a single data loss incident in all that time, and I've managed some very large (PB-scale) and active databases.
Postgres is definitely more difficult to administer.
MySQL used to have horrible and very unsafe defaults for new installations that persisted well after the introduction of InnoDB. Those went unfixed for a very long time.
I recall this being the case A LOOOONG time ago, but I haven't heard of, read about, been warned to look out for, or personally seen such a thing in forever. Have you?
* I'm running a lot of MySQL stuff and such a topic might be of interest to me
Yes, it is messy when you want your MySQL databases to be mission-critical in production, e.g. handling a large amount of customer data. Historically, MySQL's high-availability architecture has had a lot of design and implementation issues because it was an afterthought. Dealing with a large amount of critical data means you need it to be performant, reliable, and available at the same time, which is hard and requires you to deal with caching, sharding, replication, network issues, zone/resource planning, failovers and leader elections, semi-sync bugs, corrupted logs, manually fixing bad queries that killed the database, data migration, version upgrades, etc. There is a reason why big corps like Google/Meta have dedicated teams of experts (like the people who actually wrote the HA features) to maintain their mission-critical MySQL deployments.
From what I can tell, MySQL is supposed to be safe since 2018 if you have no data from before 2010.
The fact that you still can't use DDL in transactions makes life exceedingly painful, but it's technically safe if you write your migration code carefully enough.
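One way to read "carefully enough" is a sketch like the following (PyMySQL, with an invented table/column): since every DDL statement implicitly commits, the only real safety net is making each step idempotent so a half-finished migration can be re-run.

```python
import pymysql

conn = pymysql.connect(host="localhost", user="app", password="...", database="shop")

def column_exists(cur, table, column):
    cur.execute(
        "SELECT COUNT(*) FROM information_schema.columns "
        "WHERE table_schema = DATABASE() AND table_name = %s AND column_name = %s",
        (table, column),
    )
    return cur.fetchone()[0] > 0

with conn.cursor() as cur:
    # Each ALTER commits on its own, so check state first: if a previous run died
    # halfway, this lets the migration resume instead of erroring out.
    if not column_exists(cur, "orders", "shipped_at"):
        cur.execute("ALTER TABLE orders ADD COLUMN shipped_at DATETIME NULL")

conn.close()
```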
Some places still have columns declared as utf8 instead of utf8mb4, and there's a special place in hell for the authors of MySQL's general clusterfuck regarding encodings. It was all nice and great if you didn't care about anything other than latin1 or ASCII; go outside that, before the utf8 option, and it was a horror that even experienced operators managed to fuck up (I have a badge from a Google conference in 2017 with a nicely visible effect of "we mixed up one of the three separate encoding settings in MySQL and now you have mojibake on your badge").
And then there's UTF8 not actually being UTF-8, which can result in a total lockup of a table if someone inputs a character that doesn't fit in UCS-2, and now you need to recover the database from backup and preferably convert all instances of utf8 to utf8mb4, because fuck you, that's why.
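The conversion itself is usually one statement per table (index length limits can still bite); a minimal sketch via PyMySQL, with the table name and collation chosen just for illustration:

```python
import pymysql

conn = pymysql.connect(host="localhost", user="app", password="...", database="shop")
with conn.cursor() as cur:
    # Rewrites the table so 4-byte characters (emoji, astral-plane CJK, ...) fit.
    cur.execute(
        "ALTER TABLE badges CONVERT TO CHARACTER SET utf8mb4 "
        "COLLATE utf8mb4_unicode_ci"
    )
conn.close()
```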
In fairness, reasoning about collations is like peering into the abyss. I get why they’re required to have so many levels of detail, and the Unicode Consortium has done a fantastic job, but to say they’re complicated is putting it mildly.
Oracle also didn't support Boolean data types for a long time, and had a 20 some odd year public thread arguing that no one needed a Boolean data type (https://asktom.oracle.com/ords/f?p=100:11:0::::P11_QUESTION_...). They finally added it in Oracle 23 which is nice, but I wouldn't consider it to be in good company to be lacking something Oracle also lacks.
Not having a boolean data type is IMHO just an annoyance, not comparable to the lack of transactional DDL.
But to the point: people often use this to claim that MySQL is a toy database, not usable for real-world production use. I use Oracle as a counterpoint, which also has a lot of warts but is pretty much the archetype of an enterprise-grade DB engine.
Early MySQL versions made egregious design choices like quietly ignoring missing foreign keys and enum typos, truncating long strings, and randomly choosing rows from groups.
Yeah, it was bad. What kills me is that SQLite has its own absurd set of gotchas [0] yet is seen as amazing and wonderful by devs. PKs can have NULLs? Sure! Strings can have \0 in the middle of them? Why not? FKs aren't enforced by default? Yeah, who needs referential integrity, anyway?
My only conclusion is that the majority of devs don’t actually read documentation, and rely purely on the last blog post they read to influence their infrastructure decisions.
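The FK one is easy to demo with nothing but the Python stdlib (table names invented); enforcement stays off until you opt in, per connection:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE parent (id INTEGER PRIMARY KEY);
    CREATE TABLE child  (id INTEGER PRIMARY KEY,
                         parent_id INTEGER REFERENCES parent(id));
""")

# Foreign keys are OFF by default: this orphaned row is accepted silently.
conn.execute("INSERT INTO child (parent_id) VALUES (999)")
conn.commit()

# Enforcement is opt-in, per connection (and the PRAGMA is a no-op mid-transaction).
conn.execute("PRAGMA foreign_keys = ON")
try:
    conn.execute("INSERT INTO child (parent_id) VALUES (1000)")
except sqlite3.IntegrityError as exc:
    print("now enforced:", exc)
```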
That change from LGPL to GPL affected only the client library (the server was always GPL(+commercial)), and the MySQL company reacted relatively quickly with a FOSS exception to the GPL and by providing a reimplementation of the client library under the PHP license (mysqlnd) to serve that market.
(I joined MySQL shortly after that mess, before the Sun acquisition)
They also didn't like updating software - too likely that an update to PHP or MySQL or something would break some bad script by a customer, who'd then complain to the host.
I am a database specialist and have worn the DBA hat for many years. I have run MySQL and Postgres in production, both self-hosted and using managed services. Postgres wins on every single dimension that matters, every time. Yes, MySQL is easier to set up for non-experts. That counts for nothing.
If you are sticking up for MySQL in this thread... I just don't even know, man.
Right now I rely on TabsOutliner when using Chrome (which I only use for work). It lets me keep 400+ tabs open and stay sane. I like it so much that I've paid for it 3 times, and would have paid a 4th time but it seems you can't anymore, and I fear it's abandoned.
In any case, this is how I work. I use browser tabs as a kind of short- to medium-term memory. I open a bunch of things, and keep them open for as long as I might plausibly need them. To me this is just normal. I don't know how anyone lives with only 10 or 20 tabs open, or 50 tabs in a single window. How could you remember anything? But without TabsOutliner or something like it, this becomes a sprawling mess, because the browser gives you no native means to search it, or "zoom out".
Unfortunately TabsOutliner isn't available for Firefox, which I use when I have a choice. So seeing SavaOS promote Chrome... I lose a little interest right away. If it doesn't work in Firefox it's not worth getting excited about, because Chrome as a piece of software treats me like an enemy and I don't like that. So: support Firefox!
That said, if SavaOS gives me the capability to organize my tabs, maybe treat them like files I can put in directories etc etc, that sounds awesome and I want to try it. At the very least maybe it's better than TabsOutliner.
I completely understand what you mean regarding Firefox. While everything already works fine there, the extension isn’t available yet. We had to make a decision regarding the tab management feature (tab grouping), which posed some challenges for Firefox.
That said, Firefox extension support is on the way! Even if we start with a stripped-down version focused on 'bookmarking' for now. We can’t be a privacy-centric company without supporting Firefox, that’s for sure.
This is true, and in principle a good thing, but in the time since Parquet and ORC were created, GDPR and CCPA have come into existence. Any format we build in that space today needs to support in-place record-level deletion.
Yeah, so the thing you do for this is called "compaction", where you effectively merge the original + edits/deletes into a new immutable file. You then change your table metadata pointer to point at the new compacted file, and delete the old files from S3.
Due to the way S3 and its ilk are structured as globally replicated KV stores, you're not likely to get in-place edits anytime soon, and until the cost structure incentivizes otherwise you're going to continue to see data systems that favor immutable cloud storage.
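A minimal sketch of that flow with pyarrow (file names, the user_id column, and the pointer swap are made up; in practice the files would live on S3 and the swap would go through your table format's commit protocol):

```python
import pyarrow as pa
import pyarrow.compute as pc
import pyarrow.parquet as pq

# IDs to erase, e.g. collected from GDPR/CCPA deletion requests.
deleted_ids = pa.array([42, 1337])

table = pq.read_table("part-000.parquet")

# Keep only rows whose user_id is NOT in the deletion set.
keep = pc.invert(pc.is_in(table["user_id"], value_set=deleted_ids))
compacted = table.filter(keep)

# Write the new immutable file; afterwards the table metadata is repointed at it
# and the old file is deleted from object storage.
pq.write_table(compacted, "part-000-v2.parquet")
```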
You can avoid that if you store only per-user encrypted content (expensive, I know). That way you just have to revoke that user's key to remove access to the data. The advantage is that you can't miss any old backups, etc.
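That pattern is sometimes called crypto-shredding; a minimal sketch with the `cryptography` package (the in-memory key dict stands in for whatever KMS or key table you'd actually use):

```python
from cryptography.fernet import Fernet

user_keys = {}  # in reality this lives in a KMS or a dedicated key table

def store_for_user(user_id: str, plaintext: bytes) -> bytes:
    key = user_keys.setdefault(user_id, Fernet.generate_key())
    return Fernet(key).encrypt(plaintext)

def forget_user(user_id: str) -> None:
    # Destroying the key makes every copy of the ciphertext (backups included) unreadable.
    user_keys.pop(user_id, None)

blob = store_for_user("alice", b"order history ...")
forget_user("alice")  # blob is now effectively deleted everywhere it was replicated
```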
yeah actually it does mean that