Here's a Haskell version that uses a single loop to do the normalizing, and has a `main` that doesn't use do syntax. This was mostly a nice little brainteaser:
`mapNormalize` takes a function and produces a function that runs it on the normalized input. It runs in a single iteration rather than two maps. Credit to `dylex` for much hand-holding on the single-iteration normalizer :)
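A rough sketch of the idea (the gist's actual implementation may differ; here the "subtract the minimum" and "divide by the range" steps are fused into one map, after finding min and max together in a single fold):

```haskell
import Data.List (foldl')

-- Hypothetical sketch of a mapNormalize: one fold to find the bounds,
-- one map to scale each value into [0,1] and apply the caller's function.
-- Assumes the input spans a non-zero range (otherwise scale divides by 0).
mapNormalize :: (Double -> a) -> [Double] -> [a]
mapNormalize _ []         = []
mapNormalize f xs@(x : _) = map (f . scale) xs
  where
    (lo, hi) = foldl' (\(l, h) y -> (min l y, max h y)) (x, x) xs
    scale y  = (y - lo) / (hi - lo)
```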
Because Haskell is evaluated lazily, two maps don't imply two iterations over the data. In this case, I would expect only one pass through the data for both maps combined.
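To illustrate: in a two-map normalizer like the one below, the outer map consumes each element as the inner map produces it, so the intermediate list is never fully materialized (and GHC's rewrite rules can fuse `map f . map g` into `map (f . g)` anyway):

```haskell
-- Two maps, but effectively one pass over the list thanks to laziness.
normalize :: [Double] -> [Double]
normalize xs = map (/ range) (map (subtract lo) xs)
  where
    lo    = minimum xs
    range = maximum xs - lo
```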
Out of curiosity, I did a quick criterion benchmark for both functions. Here's the result on a list of 100 elements:
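(The benchmark setup isn't shown here; a criterion harness comparing the two might look roughly like this, assuming `normalize` and `mapNormalize` from above are in scope:)

```haskell
import Criterion.Main

-- Sketch of a criterion benchmark over a 100-element list.
main :: IO ()
main = defaultMain
  [ bench "two maps"    (nf normalize xs)
  , bench "single pass" (nf (mapNormalize id) xs)
  ]
  where
    xs = [1 .. 100] :: [Double]
```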
(As an aside, is there something in particular you dislike about do syntax? I find that it often makes things more readable, though slightly more verbose.)
I am aware of that; as I said, this was more of a mental exercise. `mapNormalize` is harder to understand, but it was interesting to write. It wasn't meant to be a stab at your code or a claim that it's better :)
In terms of do syntax, I try to avoid it because I find it detracts from the overall flow of data. With `putStrLn . spark . map read =<< getArgs`, I find it easy to see that `main` doesn't do much other than transform the user's input. Maybe it's a personal-preference thing, though...
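For comparison, here are the two styles side by side (assuming a `spark :: [Double] -> String` as in the gist):

```haskell
import System.Environment (getArgs)

-- do-notation version: each step is named and sequenced explicitly.
mainDo :: IO ()
mainDo = do
  args <- getArgs
  let values = map read args :: [Double]
  putStrLn (spark values)

-- Point-free version: main reads as a single right-to-left data flow.
mainPF :: IO ()
mainPF = putStrLn . spark . map read =<< getArgs
```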
https://gist.github.com/1367709