Tests show that Python's math.gcd is an order of magnitude faster than a naive Euclidean algorithm implementation:
import math
from timeit import default_timer as timer

def gcd(a, b):
    # naive Euclidean algorithm in pure Python
    while b != 0:
        a, b = b, a % b
    return a

def main():
    a = 28871271685163
    b = 17461204521323

    start = timer()
    print(gcd(a, b))
    end = timer()
    print(end - start)

    start = timer()
    print(math.gcd(a, b))
    end = timer()
    print(end - start)

main()
gives
$ python3 test.py
1
4.816000000573695e-05
1
8.346003596670926e-06
That is e-05 vs e-06. I guess there are some optimizations, or it uses some other algorithm?
math.gcd() is certainly a Python shim over a library function that runs as machine code (i.e. compiled from C), not a function executed by the Python interpreter. See also: Where are math.py and sys.py?

This should be it (for CPython):
math_gcd(PyObject *module, PyObject * const *args, Py_ssize_t nargs) in mathmodule.c, and it calls _PyLong_GCD(PyObject *aarg, PyObject *barg) in longobject.c, which apparently uses Lehmer's GCD algorithm.
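As a quick sanity check from the interpreter that math.gcd really is C code and not Python (a small illustrative snippet, nothing beyond the standard inspect module):

import inspect
import math

print(type(math.gcd))               # <class 'builtin_function_or_method'>
print(inspect.isbuiltin(math.gcd))  # True -> implemented in C
# inspect.getsource(math.gcd) raises TypeError: there is no Python source for it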
The C code in longobject.c is smothered in housekeeping operations and special-case handling, though, which increases its complexity considerably. Still, it is quite clean.
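For illustration only, here is a rough pure-Python sketch of the idea behind Lehmer's algorithm, following Knuth's Algorithm L (TAOCP Vol. 2, 4.5.2). It is not CPython's actual implementation; the 64-bit "digit" size and the name lehmer_gcd are assumptions made for the example:

import math
import random

DIGIT_BITS = 64  # assumed single-word size for the cheap inner loop

def lehmer_gcd(a, b):
    a, b = abs(a), abs(b)
    if a < b:
        a, b = b, a
    while b >= (1 << DIGIT_BITS):
        # Leading DIGIT_BITS bits of a, and the bits of b at the same positions.
        shift = a.bit_length() - DIGIT_BITS
        x, y = a >> shift, b >> shift
        # 2x2 cofactor matrix, starts as the identity.
        A, B, C, D = 1, 0, 0, 1
        while y + C != 0 and y + D != 0:
            # The true quotient a // b is bracketed by these two estimates;
            # stop simulating as soon as they disagree.
            q = (x + A) // (y + C)
            if q != (x + B) // (y + D):
                break
            # One simulated Euclidean step, done entirely on small numbers.
            A, B, x, C, D, y = C, D, y, A - q * C, B - q * D, x - q * y
        if B == 0:
            # No progress from the single-word steps:
            # do one ordinary full-precision Euclidean step instead.
            a, b = b, a % b
        else:
            # Apply the whole batch of simulated steps to the big numbers at once.
            a, b = A * a + B * b, C * a + D * b
    # Finish with the plain Euclidean algorithm once b fits in a word.
    while b:
        a, b = b, a % b
    return a

# Quick check against math.gcd:
for _ in range(1000):
    m, n = random.getrandbits(256), random.getrandbits(256)
    assert lehmer_gcd(m, n) == math.gcd(m, n)

For numbers as small as the ones in the question, most of the measured gap is simply compiled C versus interpreted Python; Lehmer's trick pays off more as the operands grow, since the expensive multi-precision operations are only done once per batch of simulated single-word steps.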