Normally the dtype is hidden when it's equivalent to the native type:
>>> import numpy as np
>>> np.arange(5)
array([0, 1, 2, 3, 4])
>>> np.arange(5).dtype
dtype('int32')
>>> np.arange(5) + 3
array([3, 4, 5, 6, 7])
But somehow that doesn't apply to floor division or modulo:
>>> np.arange(5) // 3
array([0, 0, 0, 1, 1], dtype=int32)
>>> np.arange(5) % 3
array([0, 1, 2, 0, 1], dtype=int32)
Why is there a difference?
Python 3.5.4, NumPy 1.13.1, Windows 64-bit
You actually have multiple distinct 32-bit integer dtypes here. This is probably a bug.
NumPy has (accidentally?) created multiple distinct signed 32-bit integer types, probably corresponding to C `int` and `long`. Both of them display as `numpy.int32`, but they're actually different objects. At C level, I believe the type objects are `PyIntArrType_Type` and `PyLongArrType_Type`, generated here.

`dtype` objects have a `type` attribute holding the type object for scalars of that dtype, and it is this `type` attribute that NumPy inspects when deciding whether to print dtype information in an array's repr.
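As an illustration of that repr rule (a small sketch, not part of the original answer; output shown for a platform whose default integer type matches `np.int_`):

```python
import numpy as np

# Default integer dtype: dtype.type matches np.int_, so repr omits it
a = np.arange(3)
print(repr(a))                      # array([0, 1, 2])

# Non-default dtype: repr spells it out
b = np.arange(3, dtype=np.uint8)
print(repr(b))                      # array([0, 1, 2], dtype=uint8)

# The check is based on dtype.type, not on dtype equality
print(a.dtype.type is np.int_)      # True on most platforms
```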
`numpy.arange(5)` and `numpy.arange(5) + 3` have `.dtype.type` set to `numpy.int_`, while `numpy.arange(5) // 3` and `numpy.arange(5) % 3` have `.dtype.type` set to the other 32-bit signed integer type.

As for why `+` and `//` have different output dtypes: they use different type resolution routines. Here's the one for `//`, and here's the one for `+`. `//`'s type resolution looks for a ufunc inner loop taking types the inputs can be safely cast to, while `+`'s type resolution applies NumPy type promotion to the arguments and picks the loop matching the promoted type.
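A way to poke at both claims from Python (a diagnostic sketch, not from the original answer; on later NumPy releases the duplicate scalar type was removed, so the `.type` objects may now be identical):

```python
import numpy as np

add_res = np.arange(5) + 3
div_res = np.arange(5) // 3

# The dtypes compare equal regardless of the bug...
print(add_res.dtype == div_res.dtype)            # True

# ...but on the affected NumPy 1.13 / Windows setup, the scalar type
# objects behind them were distinct:
print(add_res.dtype.type, div_res.dtype.type)
print(add_res.dtype.type is div_res.dtype.type)  # False on the buggy setup

# Each ufunc advertises its inner loops as "inputs->output" signatures
# in dtype character codes; type resolution chooses one of these.
print(np.floor_divide.types[:6])

# np.add resolves via ordinary NumPy type promotion:
print(np.result_type(np.int32, np.int64))        # int64
```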