The number of dirty pages the kernel keeps in RAM is configurable through sysctl. Once that buffer is full, any further write blocks the process. If you have less free RAM than the allowed dirty-page buffer, though, what you said is correct.
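A rough sketch of the knobs involved (the percentages and byte values here are illustrative, not recommendations):

```shell
# Show the current dirty-page limits (percent of reclaimable memory).
sysctl vm.dirty_background_ratio vm.dirty_ratio

# Background writeback starts at 5% dirty pages; writers start
# blocking (throttling) at 10%.
sudo sysctl -w vm.dirty_background_ratio=5
sudo sysctl -w vm.dirty_ratio=10

# On large-RAM machines absolute byte limits can be used instead;
# setting these to non-zero overrides the *_ratio knobs.
sudo sysctl -w vm.dirty_background_bytes=268435456  # 256 MiB
sudo sysctl -w vm.dirty_bytes=1073741824            # 1 GiB
```

Put the same `key = value` pairs in /etc/sysctl.d/ to make them persistent across reboots.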
There is a new patch[1] that allows setting a soft and a hard minimum amount of RAM reserved for clean pages. This fixes the problem almost completely, even under the heaviest loads.
cgroups, as a sibling post mentioned, can also help by setting soft limits for heavy background tasks like compile jobs. A soft limit gives them as much RAM as is available, or swaps them out completely when memory is heavily contended, effectively pausing the processes. It requires some setup, so it's not a solution for all cases, but it can make sense even without the thrashing problem.
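One low-setup way to do this, assuming a systemd system with the cgroup v2 unified hierarchy, is `systemd-run`, which places the command in its own transient cgroup scope (the 2G value is just an example):

```shell
# MemoryHigh= is the soft ceiling: when the job's memory use exceeds
# it, the kernel throttles the group and reclaims (swaps out) its
# pages aggressively instead of OOM-killing it. Other processes'
# working sets stay largely untouched.
systemd-run --user --scope -p MemoryHigh=2G make -j"$(nproc)"
```

Without systemd you can get the same effect by creating a cgroup under /sys/fs/cgroup manually and writing a byte value into its `memory.high` file before adding the build's PID to `cgroup.procs`.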
[1]: https://github.com/hakavlad/le9-patch