Because Haskell is evaluated lazily, two maps don't imply two separate traversals of the data. In this case I would expect only a single pass through the list, with each element flowing through both maps before the next is examined.
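To make that concrete, here's a small sketch (not the original code) using Debug.Trace to show when each map actually touches an element:

```haskell
import Debug.Trace (trace)

-- Two stacked maps over the same list; the trace calls show when
-- each function actually evaluates an element.
stacked :: [Int] -> [Int]
stacked = map (\x -> trace ("second map: " ++ show x) (x * 2))
        . map (\x -> trace ("first map: "  ++ show x) (x + 1))

main :: IO ()
main = print (stacked [1, 2, 3])
-- stderr shows the two maps interleaving per element, rather than
-- a complete first pass followed by a complete second pass
```

With optimizations on, GHC's rewrite rules can go further and fuse `map f . map g` into `map (f . g)`, eliminating even the intermediate cons cells; but the single interleaved traversal holds even without that.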
Out of curiosity, I ran a quick criterion benchmark of both functions. Here's the result on a list of 100 elements:
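For reference, a minimal criterion setup along these lines might look like the following. Note that `normalize` and `normalizeMap` here are hypothetical placeholders standing in for the two versions under discussion, not the code from the answer:

```haskell
import Criterion.Main

-- Hypothetical stand-ins for the two implementations being compared.
normalize :: [Double] -> [Double]
normalize xs = map (/ maximum xs) xs

normalizeMap :: [Double] -> [Double]
normalizeMap xs = let m = maximum xs in map (/ m) xs

main :: IO ()
main = defaultMain
  [ bgroup "100 elements"
      [ bench "normalize"    $ nf normalize    input
      , bench "normalizeMap" $ nf normalizeMap input
      ]
  ]
  where input = [1 .. 100] :: [Double]
```

`nf` forces the result to normal form so the benchmark measures the full computation rather than just building a thunk.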
(As an aside, is there something in particular you dislike about do syntax? I find that it often makes things more readable, though slightly more verbose.)
I am aware of that; as I said, this was more of a mental exercise. normalizeMap is harder to understand, but it was interesting to write. It wasn't meant as a stab at your code or a claim that it's better :)
As for do syntax, I try to avoid it because I find it detracts from the overall flow of data. With "putStrLn . spark . map read =<< getArgs" I find it easy to see that main doesn't do much other than transform the user's input. Maybe a personal preference thing, though.
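For comparison, here are the two styles side by side. The spark function below is a hypothetical sketch just so the example compiles; the real implementation from the thread isn't shown here:

```haskell
import System.Environment (getArgs)

-- Hypothetical spark-line renderer: scales each value into one of
-- eight block characters. Placeholder only, not the original spark.
spark :: [Double] -> String
spark xs = map pick xs
  where
    lo   = minimum xs
    hi   = maximum xs
    bars = "▁▂▃▄▅▆▇█"
    pick x
      | hi == lo  = head bars
      | otherwise = bars !! min 7 (floor ((x - lo) / (hi - lo) * 8))

-- Point-free: the whole program reads as one data transformation.
main :: IO ()
main = putStrLn . spark . map read =<< getArgs

-- The equivalent do-notation version, slightly more verbose:
mainDo :: IO ()
mainDo = do
  args <- getArgs
  putStrLn (spark (map read args))
```

Both versions do exactly the same thing; the choice is purely stylistic, trading the explicit intermediate name `args` for a single pipeline.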