When reading such extreme numbers, I always wonder what I might be doing wrong when my MSSQL-based CRUD application warms up its caches with around 600,000 rows and it takes 30 seconds to load them from the DB into RAM on my 4x3GHz machine :-D
Yes - OLAP databases are built with a completely different performance tradeoff. The way data is stored and the query planner are optimised for exactly these types of queries.
If you're working in an OLTP system, you're not necessarily doing it wrong, but you may wish to consider exporting the data to an OLAP tool if you're frequently doing big queries. And nowadays there are ways to 'do both', e.g. you can run the DuckDB query engine within a Postgres instance.
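For a concrete picture of the export route, here's a minimal sketch using DuckDB's Python API. The table name, column names, and the Parquet file are all made up for illustration; the point is that once the data sits in a columnar format, the big aggregate scans are exactly what the engine is built for:

```python
import duckdb

con = duckdb.connect()  # in-process, no server needed

# One-off export from Postgres via DuckDB's postgres extension
# (connection string and table name are placeholders):
# con.sql("INSTALL postgres; LOAD postgres;")
# con.sql("ATTACH 'dbname=app user=app' AS pg (TYPE postgres);")
# con.sql("COPY (SELECT * FROM pg.orders) TO 'orders.parquet';")

# The kind of query an OLAP engine eats for breakfast:
result = con.sql("""
    SELECT customer_id, count(*) AS n, sum(amount) AS total
    FROM 'orders.parquet'
    GROUP BY customer_id
    ORDER BY total DESC
    LIMIT 10
""").fetchall()
```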
Maybe? Don't know. I never had problems bulk uploading into Postgres though; it's right there in the documentation, and I don't have to have a weird executable on my corporately castrated laptop.
But yeah, if you're using Python and loading row by row, or loading a large amount into a large table that has a clustered index, chances are it'll be dead slow, but that's expected. Something like the sketch below is the usual fix.
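A hypothetical psycopg2 sketch of the difference (the connection string, table, and CSV path are placeholders): row-by-row INSERTs pay a network round trip and a parse/plan per row, while COPY streams everything in one statement.

```python
import psycopg2

conn = psycopg2.connect("dbname=app user=app")
cur = conn.cursor()

# Slow: one round trip per row, ~600k times.
# for row in rows:
#     cur.execute("INSERT INTO items (id, name) VALUES (%s, %s)", row)

# Fast: stream the whole file through COPY in a single statement.
with open("items.csv") as f:
    cur.copy_expert("COPY items (id, name) FROM STDIN WITH (FORMAT csv)", f)

conn.commit()
```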
Maybe I'm missing something fundamental here