A while back I had to move a MariaDB database that should have been tiny. When we actually went to look at it, it was over 400GB, and obviously the first question was WTF. The developers had decided to log to a table, which did make querying the logs really simple, but meant we had to get a bit creative in the migration (which was actually kind of fun in the end). I'm sure it's been done many times, but it was the first time I had run into it.
Ohh, I've seen similar situations. I recall wanting to dump a complex shared dev database so I could test destructive migrations locally in a container with no risk (that particular DBMS didn't support transactional DDL all that well).
The problem was that the database was far too large for a full dump to be feasible... until I discovered that about 80-90% of the data was in log tables, which in my case I could simply skip, exporting everything else instead.
Now, the logging implementation there used the EAV pattern and had gotten somewhat convoluted after years of development and maintenance... but it was mostly okay (and it ran alongside some traditional logging to files with logrotate).
That said, personally I'd either use a specialized log aggregation solution or, at the very least, store logs in a completely separate data store, both for resiliency and security reasons.