Right, but note you can't even split it if you are thinking of linear circuits. Precision necessarily means how your signal compares to the thermal noise floor. It is possible to show that you can't compose 8-bit-precision linear units to get a value with more than 8 bits of precision. What actually happens is the opposite: if the noise of the units is uncorrelated, it propagates and grows on the order of sqrt(number of operations). Avoiding this error propagation is another advantage of digital operations.
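A minimal sketch of that sqrt(N) growth, assuming each analog operation adds independent Gaussian noise of some fixed std sigma (the value 0.01 here is arbitrary):

```python
import numpy as np

# Sketch: chain N analog "operations", each contributing independent
# Gaussian noise with std sigma. The accumulated noise's std grows
# as sqrt(N), so the noise floor rises by sqrt(N) after N operations.
rng = np.random.default_rng(0)
sigma = 0.01          # per-operation noise std (assumed value)
trials = 100_000

for n_ops in (1, 16, 256):
    # total accumulated noise after n_ops operations, across many trials
    noise = rng.normal(0.0, sigma, size=(trials, n_ops)).sum(axis=1)
    measured = noise.std()
    predicted = sigma * np.sqrt(n_ops)
    print(f"N={n_ops:4d}  measured={measured:.4f}  predicted={predicted:.4f}")
```

With 256 uncorrelated 0.01-std noise sources, the accumulated std comes out near 0.16, i.e. 16x the per-unit noise, matching the sqrt(N) prediction.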
The reason NNs don't exhibit strong error propagation is because of the non-linearities between linear layers that perform operations analogous to threshold/majority voting or the like, which have error correction properties.
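A toy illustration of that error-correction effect, assuming a +/-1 signal repeated through noisy stages (the noise std 0.3 and stage count 10 are arbitrary): a hard threshold between stages acts like a 1-bit repeater, re-quantizing the signal so noise doesn't compound.

```python
import numpy as np

# Sketch: pass a +/-1 signal through several noisy stages.
# Without a nonlinearity, the noise accumulates across stages;
# with a sign() threshold after each stage, the signal is
# re-quantized each time and errors don't pile up.
rng = np.random.default_rng(1)
signal = np.where(rng.random(100_000) < 0.5, 1.0, -1.0)
sigma, stages = 0.3, 10   # assumed per-stage noise and depth

linear = signal.copy()
thresholded = signal.copy()
for _ in range(stages):
    linear += rng.normal(0.0, sigma, linear.shape)  # noise accumulates
    # threshold re-quantizes to +/-1 after every noisy stage
    thresholded = np.sign(thresholded + rng.normal(0.0, sigma, thresholded.shape))

print("bit errors, linear chain:     ", np.mean(np.sign(linear) != signal))
print("bit errors, thresholded chain:", np.mean(thresholded != signal))
```

The purely linear chain ends up with roughly sqrt(10) times the per-stage noise and a correspondingly high bit-error rate, while the thresholded chain stays near the single-stage error rate.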
Interesting, but then how do you explain that rectified linear units between layers work better than sigmoids?
By your logic, shouldn't ReLU have worse error-correction properties than squashing functions?