About time/space complexity in C/C++ standards


Recently I've read about the abstract machine and the as-if rule (What exactly is the "as-if" rule?), and about the time-complexity requirements on the standard library (like this one: Is list::size() really O(n)?).

  1. Are the time/space complexity requirements on the standard library in terms of the abstract machine or in terms of a real, concrete machine?

If they are in terms of the abstract machine, it seems an implementation could actually generate code that is less efficient in terms of complexity, even though doing so would not be practical.

  2. Did the standards mention anything about the time/space complexity of non-standard-library code?

For example, I may write my own sorting code and expect O(n log n) time, but if an implementation treats this merely as code for the abstract machine, it is allowed to generate a slower sort in the emitted assembly/machine code (say, an O(n^2) sort), even though it is unlikely to do that in practice.
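Here is the kind of hand-written sort I have in mind (a minimal sketch; merge_sort is my own name for it, nothing standard):

    #include <algorithm>
    #include <iterator>

    // A plain top-down merge sort: O(n log n) comparisons as written.
    // My question is whether the *compiled* program must preserve this.
    template <typename It>
    void merge_sort(It first, It last) {
        auto n = std::distance(first, last);
        if (n < 2) return;
        It mid = std::next(first, n / 2);
        merge_sort(first, mid);
        merge_sort(mid, last);
        std::inplace_merge(first, mid, last);  // merge the two sorted halves
    }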

Or maybe I missed something about the transformation requirements between the abstract machine and a real, concrete machine. Can you help me clarify? :)

Even though I've mainly read about the C++ standard, I also want to know the situation for the C standard, so this question is tagged with both.

3 Answers

Answer (score: 2)

As you've been told in the comments, the standards don't impose any requirements on the time or space complexity of the code an implementation generates for your program. And to address your additional implicit question: yes, a compiler can change your O(n log n) code to run in O(n²) time. Or in O(n!) if it wants to.

The underlying explanation is that the standard defines correct programs, and a program is correct regardless of how long it takes to execute or how much memory it uses. These details are left to the implementation.

Specific implementations can compile your code in whichever way achieves correct behavior. It would be completely permissible, for instance, for an implementation to add a five-second delay between every line of code you wrote; the program would still be correct. It would equally be permissible for the compiler to figure out a better way of doing what you wrote and rewrite your entire program, as long as the observable behavior is the same.
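In fact, mainstream compilers really do change complexity classes under the as-if rule, just in the helpful direction. A hedged sketch (whether this happens depends on the compiler and flags, but Clang and GCC at -O2 are known to perform it on loops of this shape):

    #include <cstdint>

    // As written, this loop is O(n). Under the as-if rule, the compiler
    // may replace it with the closed form n * (n - 1) / 2, which is O(1),
    // because the observable behavior (the return value) is identical.
    std::uint64_t sum_below(std::uint64_t n) {
        std::uint64_t sum = 0;
        for (std::uint64_t i = 0; i < n; ++i)
            sum += i;
        return sum;
    }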

However, the fact that an implementation is compliant doesn't mean it is perfect. Adding five-second delays wouldn't affect compliance, but nobody would want to use such an implementation. Compilers don't do these things because they are ultimately tools: their writers want them to be useful to the people who run them, and making your code intentionally worse is not useful.

TL;DR: bad performance (time complexity, memory complexity, etc.) doesn't affect compliance, but it will make you look for a new compiler.

Answer (score: 1)

  1. Are the time/space complexity requirements on the standard library in terms of the abstract machine or in terms of a real, concrete machine?

The complexity requirements are in terms of the abstract machine:

[intro.abstract] The semantic descriptions in this document define a parameterized nondeterministic abstract machine...


  2. Did the standards mention anything about the time/space complexity of non-standard-library code?

No. The only complexity requirements in the standard are for standard containers and algorithms.

if an implementation treats this merely as code for the abstract machine, it is allowed to generate a slower sort in the emitted assembly/machine code (say, an O(n^2) sort)

That's not the worst thing it can do. An implementation could put the CPU to sleep for a year between every instruction. As long as you're patient enough, the program would have the same observable behaviour as the abstract machine, so it would be conforming.

Answer (score: 0)

Many of the complexity requirements in the C++ standard are in terms of specific counts of particular operations. These do constrain the implementation.

E.g. std::find_if

At most last - first applications of the predicate.

This is more specific than "O(N), where N = std::distance(first, last)", as it specifies a constant factor of 1.
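You can watch this guarantee directly by instrumenting the predicate; a small sketch (the counter is my own instrumentation, not part of the API):

    #include <algorithm>
    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> v{3, 1, 4, 1, 5, 9, 2, 6};
        int calls = 0;  // counts predicate applications

        auto it = std::find_if(v.begin(), v.end(), [&calls](int x) {
            ++calls;
            return x > 4;
        });

        // Guaranteed on every conforming implementation:
        // calls <= last - first, i.e. at most 8 here.
        std::cout << "found " << *it << " after " << calls << " calls\n";
    }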

And there are others that have Big-O bounds, which define what operation(s) are counted.

E.g. std::sort

O(N·log(N)), where N = std::distance(first, last) comparisons.

What this doesn't constrain is how slow each comparison is, or how many swaps occur. If your model of computation has fast comparisons and slow swaps, this bound doesn't give you a very useful analysis.
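To make that concrete, here is a sketch that counts only comparator invocations (the counter is my instrumentation; the standard's bound applies to this count, not to element moves or to the cost of each comparison):

    #include <algorithm>
    #include <iostream>
    #include <numeric>
    #include <vector>

    int main() {
        std::vector<int> v(1000);
        std::iota(v.rbegin(), v.rend(), 0);  // fills v with 999, 998, ..., 0

        long long comparisons = 0;  // counts comparator calls only
        std::sort(v.begin(), v.end(), [&comparisons](int a, int b) {
            ++comparisons;
            return a < b;
        });

        // The O(N log N) bound constrains `comparisons`; it says nothing
        // about how many moves occurred or how long each comparison took.
        std::cout << comparisons << " comparisons for N = " << v.size() << '\n';
    }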