Are compilers getting better at optimizing code over time, and if so at what rate?

We know, for example, that Moore's law states that the number of transistors on a chip doubles every 1.8-2 years (and hence computing power has been increasing at roughly this rate). This got me thinking about compiler optimizations. Are compilers getting better at making code run faster as time goes on? If they are, is there any theory as to how this performance increase scales? If I were to take a piece of code written in 1970 and compiled with a 1970 compiler, would that same code run faster on the same machine if compiled with today's optimizations? Can I expect a piece of code written today to run faster in, say, 100 years solely as a result of better optimizations/compilers (independent, obviously, of improvements in hardware and algorithms)?

BEST ANSWER

This is a complex, multi-faceted question, so let me try to hit on a few key points:

  • Compiler optimization theory is highly complex and is often (far) more difficult than the design of the language itself. It draws on many other mathematical subdomains (e.g., directed graph theory). Some problems in compiler optimization are known to be NP-complete or even undecidable (that is, either intractable in general or not algorithmically solvable at all).
  • While there are hundreds of known techniques (see here, for example), the implementation of those techniques is highly dependent on both the source language and the target CPU (its instruction set, pipelines, and so on). Because languages and CPUs are constantly evolving, the best implementation of even a well-known technique can change over time, and new CPU features and architectures can open up previously unavailable optimizations (see the first sketch after this list). Some of the most cutting-edge techniques may also be proprietary and thus not available to the general public for reuse. For example, several commercial JVMs offer specialty optimizations in their JIT compilation of Java bytecode that are, on a statistical basis, quantitatively superior to the default open-source JVMs.
  • There is an unmistakable historical trend toward better and better compiler optimization. This is why, for example, routine hand-written assembly is now quite rare. But due to the factors already discussed (and others), the gains delivered by automatic compiler optimization have accumulated quite non-linearly, in contrast to the fairly consistent curve of Moore's law and other laws relating to hardware improvement. Compiler optimization's track record is better visualized as a line with many fits and starts, and because the factors driving that non-linearity are unlikely to change soon, the trajectory will probably remain non-linear for at least the near future.
  • It would be quite difficult to state even an average rate of improvement when languages themselves come and go, not to mention CPU models with different hardware features. CPUs have gained different instruction sets and instruction-set extensions over time, so it is hard to make an "apples to apples" comparison at all. This is true regardless of which metric you use: program length in discrete instructions, program execution time (highly dependent on CPU clock speed and pipelining capabilities), or others (the second sketch after this list shows the most naive kind of measurement).
  • Compiler optimization is probably now in a regime of diminishing returns. Most of the low-hanging fruit has been picked, and much of what remains is either quite complex or yields relatively small marginal improvements. Perhaps the greatest disruptive factor on the horizon is the advent of weak (or strong) AI: many future gains will require sophisticated predictive capabilities, so the best optimizers will need some level of innate intelligence (for example, to predict the most common user inputs, to predict the most common execution paths, and to reduce NP-hard optimization problems into tractable sub-problems). It could very well be that in the future every piece of software you use is custom-compiled just for you, tailored to your specific use cases, interests, and requirements. Imagine your OS (operating system) being compiled or re-compiled specifically for you based on your use cases as a scientist vs. a video gamer vs. a corporate executive, old vs. young, or any other combination of demographics that affects how the code actually executes. Today's profile-guided optimization, sketched at the end of this answer, is a modest, non-AI step in that direction.
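
As a concrete illustration of the CPU-dependence point above, here is a minimal sketch (my own toy example, nothing from the question) of a loop that a modern GCC or Clang will typically compile very differently depending on the optimization level and the target architecture. The function name add_arrays and the build commands are assumptions about an ordinary GCC/Clang toolchain:

    /* add.c -- a simple element-wise sum. At -O0 a compiler typically emits a
     * plain scalar loop; at -O2/-O3 with a modern target it can auto-vectorize
     * the loop into SIMD instructions that simply did not exist on older CPUs. */
    #include <stddef.h>

    void add_arrays(float *dst, const float *a, const float *b, size_t n)
    {
        for (size_t i = 0; i < n; ++i)
            dst[i] = a[i] + b[i];
    }

    /* Inspect what the compiler did (GCC shown; these flags are standard):
     *   gcc -O0 -S add.c                    # unoptimized scalar assembly
     *   gcc -O3 -march=native -S add.c      # likely vectorized for the host CPU
     *   gcc -O3 -fopt-info-vec -c add.c     # GCC reports which loops it vectorized
     */

The C source is unchanged; it gets faster only because the compiler learned to exploit SIMD hardware that newer targets provide, which is exactly the "new CPU features open new optimizations" effect.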
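As a rough sketch of the measurement problem from the metrics bullet, the most naive comparison is to compile one fixed source file at different optimization levels on the same machine and time it. Everything here (the stand-in workload, the file names, the flags) is an illustrative assumption, not a rigorous benchmarking methodology:

    /* timing.c -- time an arbitrary stand-in workload. The loop bound comes from
     * the command line so the compiler cannot fold the result at build time. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static long workload(long n)
    {
        long acc = 0;
        for (long i = 0; i < n; ++i)
            acc += i % 7;
        return acc;
    }

    int main(int argc, char **argv)
    {
        long n = (argc > 1) ? atol(argv[1]) : 100000000L;
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);   /* POSIX monotonic clock */
        long result = workload(n);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs = (double)(t1.tv_sec - t0.tv_sec)
                    + (double)(t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("result=%ld elapsed=%.3fs\n", result, secs);  /* printing keeps the work alive */
        return 0;
    }

    /* Compile the same file twice and compare:
     *   cc -O0 timing.c -o t0 && ./t0
     *   cc -O2 timing.c -o t2 && ./t2
     */

Even this simple setup conflates several metrics: wall-clock time, instruction count, and binary size can each move differently between optimization levels, which is part of why an "average rate of improvement" is so hard to pin down.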
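Finally, the "compile it just for your usage pattern" idea already has a modest, non-AI ancestor: profile-guided optimization (PGO), where you compile with instrumentation, run the program on a representative workload, and recompile using the recorded execution profile. The toy program and commands below are a sketch assuming a recent GCC (Clang has an analogous -fprofile-instr-generate workflow):

    /* pgo.c -- a toy program with a data-dependent branch. With a recorded
     * profile, the compiler learns which side of the branch dominates for the
     * inputs you actually run and lays out the hot path accordingly. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        long n = (argc > 1) ? atol(argv[1]) : 1000000L;
        long hits = 0;

        for (long i = 0; i < n; ++i) {
            if (i % 100 == 0)
                hits += 1;   /* cold path: taken 1% of the time */
            else
                hits += 2;   /* hot path: taken 99% of the time */
        }
        printf("%ld\n", hits);
        return 0;
    }

    /* Typical GCC PGO workflow (standard flags):
     *   gcc -O2 -fprofile-generate pgo.c -o pgo   # instrumented build
     *   ./pgo 10000000                            # run on a representative input
     *   gcc -O2 -fprofile-use pgo.c -o pgo        # rebuild using the recorded profile
     */

A smarter, more predictive version of this feedback loop is essentially what the "tailored compilation" scenario above imagines.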