The biggest problem I see is the planner not knowing the cardinality of columns in a big table after a join or predicate has been applied. You see this especially with aggregate queries rather than point queries.
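To make the shape of it concrete, here's a hypothetical query of the kind I mean (table and column names invented for illustration): the predicate on the small table drastically narrows the join, but the planner sizes up the big table from its per-table stats alone.

    -- orders is huge, customers is small; region = 'APAC' matches
    -- few customers, so few orders rows actually qualify.
    SELECT o.product_id, count(*)
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    WHERE c.region = 'APAC'
    GROUP BY o.product_id;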
That is, it decides that a sequential scan would be just peachy even though there's an inner join in the mix which, if the join graph were constructed that way, would in practice cut the set of responsive rows way down. The quickest route out of this is disabling sequential scans, but there's no hint to do that on a per-query basis. The longer route is hiding bits of the query in CTEs so the optimizer can't rewrite too much (CTEs which need MATERIALIZED nowadays, since PG got smarter about inlining them).
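A rough sketch of both workarounds, reusing the invented query from above. The knobs themselves are real: enable_seqscan is a standard planner setting (the closest core PG gets to per-query is per-transaction via SET LOCAL), and AS MATERIALIZED is the PG 12+ syntax for fencing a CTE off from inlining.

    -- Quickest route: turn off seqscans for one transaction only.
    -- SET LOCAL reverts automatically at COMMIT/ROLLBACK.
    BEGIN;
    SET LOCAL enable_seqscan = off;
    SELECT o.product_id, count(*)
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    WHERE c.region = 'APAC'
    GROUP BY o.product_id;
    COMMIT;

    -- Longer route: hide the selective part behind a MATERIALIZED
    -- CTE so the optimizer can't inline and rewrite it.
    WITH apac_customers AS MATERIALIZED (
        SELECT id FROM customers WHERE region = 'APAC'
    )
    SELECT o.product_id, count(*)
    FROM orders o
    JOIN apac_customers c ON c.id = o.customer_id
    GROUP BY o.product_id;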
High total cardinality but low dependent cardinality - dependent on data in other tables, or on predicates applied to other tables - seems hard to capture without dynamically monitoring query patterns and data access. I don't think PG does that; if it did, I think they'd sell it hard. The situation comes up when application-level constraints shape the data distribution across multiple tables.
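For what it's worth, the extended statistics PG does have only capture correlations within a single table; nothing analogous spans a join. A sketch, again with invented names:

    -- CREATE STATISTICS can teach the planner about cross-COLUMN
    -- dependencies, but only inside one table.
    CREATE STATISTICS orders_dep (dependencies)
        ON customer_id, product_id FROM orders;
    ANALYZE orders;
    -- There is no supported cross-table equivalent along the lines of
    --   CREATE STATISTICS ... ON orders.customer_id, customers.region
    -- so correlations across a join stay invisible to the planner.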