The Postgres TEXT type is limited to 65,535 bytes, to give a concrete number. "Big" usually means big enough that you have to stream it rather than send it all at once.
You're right. I got a bad Google suggestion for "max string length postgres" that pointed to https://hevodata.com/learn/postgresql-varchar/, which says the varchar max is 64KB -- also wrong. I gotta stick to the official docs. Anyway, TEXT is the one we care about.
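To make the "stream rather than send all at once" part concrete, here's a minimal sketch assuming Python with psycopg2, a scratch database named "test", and a made-up local file; a small value goes through a normal parameterized INSERT, while a genuinely big one gets streamed in chunks through the large-object API instead of being built up as one giant string:

    import psycopg2

    # Sketch only: DSN, table and file names are placeholders.
    conn = psycopg2.connect("dbname=test")

    with conn.cursor() as cur:
        # Small-ish value: a single parameterized INSERT is fine.
        cur.execute("CREATE TABLE IF NOT EXISTS notes (id serial PRIMARY KEY, body text)")
        cur.execute("INSERT INTO notes (body) VALUES (%s)", ("x" * 100_000,))

    # "Big" value: stream it in chunks via the large-object facility instead
    # of holding the whole thing in memory and shipping it in one statement.
    lobj = conn.lobject(0, "wb")              # oid=0 lets the server assign one
    with open("backup.dump", "rb") as f:      # hypothetical local file
        while chunk := f.read(8192):
            lobj.write(chunk)
    print("stored large object with oid", lobj.oid)
    lobj.close()
    conn.commit()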
In storage you measure things in blocks. Historically, blocks were 512 bytes, but today the tendency is to make them bigger; 4K would be the typical size in a server setting.
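For reference, a quick way to check which block sizes you're actually dealing with (a sketch assuming Linux and Python; the paths and database name are placeholders):

    import os

    # Filesystem block size, as reported by statvfs.
    fs = os.statvfs("/var/lib/postgresql")    # hypothetical data directory
    print("filesystem block size:", fs.f_bsize)

    # Database page size (needs psycopg2 and a reachable server); PostgreSQL
    # compiles in its own page size, 8 kB by default.
    # import psycopg2
    # with psycopg2.connect("dbname=test") as conn, conn.cursor() as cur:
    #     cur.execute("SHOW block_size")
    #     print("postgres page size:", cur.fetchone()[0])   # typically 8192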
So, the idea here is this: databases that store structured information, i.e. integers, booleans, short strings, are typically relational databases, e.g. PostgreSQL.
Filesystems (e.g. Ext4) usually think in whole blocks, but are designed with an eye toward smaller files, i.e. files aren't expected to be more than some tens or hundreds of blocks in size for optimal performance.
Object stores (e.g. S3) are the kind of storage system that is supposed to work well for anything larger than typical files.
This gives the answer to your question: blobs in a relational database are probably OK if they are smaller than one block. Databases will probably be able to handle bigger ones too, but you will start seeing serious drops in performance when it comes to indexing, filtering, searching etc., because such systems optimize internal memory buffers so that they can fit a "perfect" number of elements of the "perfect" size.
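If you want to see the packing effect for yourself, here's a rough sketch (Python + psycopg2, scratch database "test" assumed): a table whose rows carry an inline blob-ish column spreads the same number of rows over far more pages than a narrow table does.

    import psycopg2

    with psycopg2.connect("dbname=test") as conn, conn.cursor() as cur:
        cur.execute("DROP TABLE IF EXISTS narrow, wide")
        cur.execute("CREATE TABLE narrow (id int, flag bool)")
        cur.execute("CREATE TABLE wide   (id int, blob text)")
        cur.execute("INSERT INTO narrow SELECT g, true FROM generate_series(1, 10000) g")
        # ~1.5 kB values are small enough to stay inline in the heap pages.
        cur.execute("INSERT INTO wide SELECT g, repeat('x', 1500) FROM generate_series(1, 10000) g")
        for t in ("narrow", "wide"):
            cur.execute("SELECT pg_relation_size(%s) / current_setting('block_size')::int", (t,))
            print(t, "takes", cur.fetchone()[0], "pages for 10k rows")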
Another concern here is that with stored elements larger than a single block you need a different approach to parallelism. Ultimately, the number of blocks used by an I/O operation determines its performance. If you are reading/writing sub-block-sized elements, you try to make it so that they come from the same block, to minimize the number of requests made to the physical storage. If you work with multi-block elements, your approach to performance optimization is different -- you try to pre-fetch the "neighbor" blocks because you expect you might need them soon. Modern storage hardware has a decent degree of parallelism that allows you to queue multiple I/O requests w/o awaiting completion. This latter mechanism is a lot less relevant to something like an RDBMS, but is at the heart of an object store.
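A toy way to see the "queue multiple requests without awaiting completion" idea (Python sketch; a thread pool stands in for a real async submission interface like io_uring, and the file path and offsets are made up):

    import os
    from concurrent.futures import ThreadPoolExecutor

    BLOCK = 4096
    PATH = "/var/tmp/big.img"                 # hypothetical multi-gigabyte file

    def read_block(fd, block_no):
        # pread lets several threads read at independent offsets from one fd.
        return os.pread(fd, BLOCK, block_no * BLOCK)

    fd = os.open(PATH, os.O_RDONLY)
    try:
        wanted = [0, 7, 129, 130, 131, 4096]  # scattered plus neighboring blocks
        with ThreadPoolExecutor(max_workers=8) as pool:
            blocks = list(pool.map(lambda n: read_block(fd, n), wanted))
        print(sum(len(b) for b in blocks), "bytes read")
    finally:
        os.close(fd)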
In other words: the problem is not a function of the database's size. In principle, nothing stops e.g. PostgreSQL from special-casing blobs and dealing with them differently than it would normally deal with "small" objects... but they probably aren't interested in doing so, because you already have appropriate storage for that kind of stuff, and PostgreSQL, like most other RDBMSes, sits on top of the storage meant for larger objects (the filesystem), so it has no hope of doing it better than the layer below it.
Most of what you wrote there is simply not true for modern DBMSes. Specifically, PostgreSQL has a mechanism called TOAST (https://www.enterprisedb.com/postgres-tutorials/postgresql-t...) that does exactly what you claim they "probably aren't interested in doing" and completely eliminates any performance penalty from large objects in a table when they are not used.
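If you want to poke at it, here's a rough sketch (Python + psycopg2, scratch database "test" assumed) showing the TOAST side table that backs a table with a text column and how much of the total size lives outside the main heap:

    import psycopg2

    with psycopg2.connect("dbname=test") as conn, conn.cursor() as cur:
        cur.execute("DROP TABLE IF EXISTS docs")
        cur.execute("CREATE TABLE docs (id int, body text)")
        # Values wider than ~2 kB get TOASTed: compressed and/or pushed out of
        # line into the table's pg_toast relation. Forcing EXTERNAL storage
        # here just makes the out-of-line part easy to see.
        cur.execute("ALTER TABLE docs ALTER COLUMN body SET STORAGE EXTERNAL")
        cur.execute("INSERT INTO docs SELECT g, repeat(md5(g::text), 1000) FROM generate_series(1, 100) g")
        cur.execute("SELECT reltoastrelid::regclass FROM pg_class WHERE relname = 'docs'")
        print("toast table:", cur.fetchone()[0])
        cur.execute("SELECT pg_relation_size('docs'), pg_total_relation_size('docs')")
        print("main heap bytes vs total bytes (incl. TOAST and indexes):", cur.fetchone())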
PostgreSQL, just like most RDBMSes, uses the filesystem as a backend. It cannot be faster than the filesystem it uses to store its data. At best, you may be able to configure it to do more caching, and then it will be "faster" as long as you have enough memory for caching...
What does that have to do with my comment? I didn't say that an RDBMS is faster than a filesystem, just that your statements about them "seeing serious drops in performance when it comes to indexing, filtering, searching etc." are clearly wrong.
It seems very clear that it is you who knows some vaguely relevant bit of trivia but actually has no clue about the subject in general, and tries to make up for that with arrogance. Which is, honestly, pretty embarrassing.
"Filesystems (eg. Ext4) usually think about whole blocks, but are designed with the eye for smaller files, i.e. files aren't expected to be more than some ten or hundred blocks in size for optimal performance."
Sorry what?
I mean, ext4, as a special case, has some performance issues around multiple writers to a single file when doing direct I/O, but I can't think of a single other place where your statement is true... and plenty where it's just not (XFS, JFS, ZFS, NTFS, ReFS).
God, how do you come up with this nonsense? Can you read what you reply to, or do you just write this to show off because you happened to know some vaguely relevant bit of trivia, but actually have no clue about the subject in general?
Yes, the goal of a filesystem is to perform best when files are bigger than one block and smaller than a couple hundred blocks. That is what it's optimized for. Can it deal with smaller or bigger files? Yes, but that's beside the point; filesystems are by design meant for the sizes I mentioned. It's stupid to store information in files much smaller than a single block, because filesystems store a lot of metadata per file. If your files are too small, you start paying an exorbitant price for metadata (filesystems don't optimize for storing metadata in bulk for multiple files, because such an optimization would mean a lot of synchronization when dealing with multiple files in parallel).
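A crude way to see that per-file overhead (Python sketch; the exact numbers depend on the filesystem, ext4 with 4K blocks assumed here):

    import os, tempfile

    d = tempfile.mkdtemp()
    path = os.path.join(d, "tiny.txt")
    with open(path, "wb") as f:
        f.write(b"0123456789")               # 10 bytes of payload

    st = os.stat(path)
    # st_blocks is reported in 512-byte units; a 10-byte file still occupies a
    # whole filesystem block, plus an inode's worth of metadata not shown here.
    print("logical size:     ", st.st_size, "bytes")
    print("allocated on disk:", st.st_blocks * 512, "bytes")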
Similarly, filesystems by and large aren't good at dealing with large chunks of data, e.g. database backups, VM images, etc. Typically, a storage system for large objects will try to optimize its performance with larger-than-block compression and deduplication. Neither makes sense when your target chunk size is in the single to double digits of blocks -- there won't be enough overlap between different data chunks, and you will pay more for decompressing data you don't want to read if the granularity of the compressed chunks is too big.
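A toy illustration of why chunk granularity matters for compression (Python sketch with synthetic data; zlib stands in for whatever the store actually uses): redundancy whose period is larger than a block is invisible to block-sized chunks, while big chunks catch it, at the price of decompressing a whole chunk to read a few bytes.

    import os, zlib

    # 2 MiB with a 16 KiB period: a random blob repeated 128 times, a crude
    # stand-in for "many similar images/backups".
    data = os.urandom(16 * 1024) * 128

    def compressed_size(chunk_size):
        return sum(len(zlib.compress(data[i:i + chunk_size]))
                   for i in range(0, len(data), chunk_size))

    for chunk in (4 * 1024, 64 * 1024, 1024 * 1024):
        print(f"chunk={chunk:>8}: compresses to ~{compressed_size(chunk)} bytes; "
              f"reading 1 byte means decompressing {chunk} bytes")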
And this is not a secret at all... talk to anyone who works on a database, a filesystem or an object store -- they will tell you exactly this. (I worked on a filesystem, for the record.) This is why these things all co-exist and fill their respective niches...
How many BLOBs does one need to have, and how often do we need to touch them, for this solution to become untenable?