The main reason the MonomorphismRestriction exists is to avoid unexpected duplication of work.
One example is the following function:
len2 xs = (len, len)
  where len = genericLength xs
This has the inferred type (Num b, Num c) => [a] -> (b, c), which can unfortunately lead to situations where something like uncurry (+) . len2 actually computes genericLength twice, despite only needing to compute it once.
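To make the duplication visible, here is a sketch using Debug.Trace (the trace call is just instrumentation; at -O0 it fires once per component of the pair, since the generalized len is rebuilt at each use):

```haskell
{-# LANGUAGE NoMonomorphismRestriction #-}
module Len2Demo where

import Data.List (genericLength)
import Debug.Trace (trace)

-- With the fully polymorphic type, len is generalized: it becomes a
-- function of a Num dictionary, so each component of the pair builds a
-- fresh thunk and genericLength runs once per component.
len2 :: (Num b, Num c) => [a] -> (b, c)
len2 xs = (len, len)
  where
    len = trace "computing genericLength" (genericLength xs)
```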
However, even without MonomorphismRestriction, the safer type Num b => [a] -> (b, b) is correctly inferred once you turn on MonoLocalBinds.
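For comparison, this is what the shared version looks like with that type written out (the signature matches what MonoLocalBinds infers; the pragma is what stops the where-bound len from being generalized):

```haskell
{-# LANGUAGE MonoLocalBinds #-}
module Len2Shared where

import Data.List (genericLength)

-- With MonoLocalBinds, the where-bound len is not generalized: both
-- components of the pair are the *same* thunk at type b, so the length
-- is computed at most once.
len2 :: Num b => [a] -> (b, b)
len2 xs = (len, len)
  where
    len = genericLength xs
```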
Given that MonoLocalBinds is automatically turned on by GADTs and by TypeFamilies (and SHOULD be automatically turned on by OVERLAPS, but currently is not), it seems as though having MonoLocalBinds replace MonomorphismRestriction in the long run would make sense. Particularly since there seems to be a general consensus that TypeFamilies and GADTs are incredibly useful and might eventually be made standard.
Now certain things like:
foo = 8 ^ 8 ^ 8
bar = foo + foo
still risk duplication. But I would argue that such expressions are not common enough in real code to warrant the pitfalls of MonomorphismRestriction.
Also, if foo is exported in the above example, then I would argue it is correct to give it the type Num a => a even with the duplication risk; and if it is not exported, it seems like it should not be hard for the optimizer to remove the duplication within bar.
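The duplication here can be demonstrated the same way as with len2; a sketch with a trace added (at -O0 the trace fires once per use of foo in bar, since each use applies the dictionary function again):

```haskell
{-# LANGUAGE NoMonomorphismRestriction #-}
module FooDemo where

import Debug.Trace (trace)

-- Without the restriction, foo :: Num a => a is a function of the Num
-- dictionary, so every use site builds a fresh thunk for 8 ^ 8 ^ 8.
foo :: Num a => a
foo = trace "evaluating foo" (8 ^ 8 ^ 8)

bar :: Integer
bar = foo + foo
```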
On a side note, is there any chance Haskell will implement type-specific memoization at some point in the future? That seems like it would be very convenient for polymorphic top-level computations.
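To make that concrete: nothing like this exists as a language feature, but a per-type cache for named top-level computations can be approximated today with Typeable and Dynamic. This is only an illustrative sketch (memoByKey and its global cache are names I made up here, and the unsafePerformIO trick is not something I would put in real code):

```haskell
module TypeMemo where

import Data.Dynamic (Dynamic, fromDynamic, toDyn)
import Data.IORef (IORef, atomicModifyIORef', newIORef)
import Data.Map (Map)
import qualified Data.Map as Map
import Data.Typeable (TypeRep, Typeable, typeOf)
import System.IO.Unsafe (unsafePerformIO)

-- One global table from (definition name, monomorphic type) to the cached
-- result.  NOINLINE keeps GHC from duplicating the IORef.
cache :: IORef (Map (String, TypeRep) Dynamic)
cache = unsafePerformIO (newIORef Map.empty)
{-# NOINLINE cache #-}

-- The first use at a given type stores the (still unevaluated) thunk;
-- later uses at the same type get that stored thunk back, so the work is
-- done at most once per instantiation.  The caller must pick a key that
-- is unique per definition.
memoByKey :: Typeable a => String -> a -> a
memoByKey name x = unsafePerformIO $
  atomicModifyIORef' cache $ \m ->
    case Map.lookup (name, typeOf x) m >>= fromDynamic of
      Just v  -> (m, v)
      Nothing -> (Map.insert (name, typeOf x) (toDyn x) m, x)
{-# NOINLINE memoByKey #-}
```

With this, writing foo = memoByKey "foo" (8 ^ 8 ^ 8) would share the Integer result across all Integer uses of foo, while other instantiations each get their own cached copy.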