Normally the dtype
is hidden when it's equivalent to the native type:
>>> import numpy as np
>>> np.arange(5)
array([0, 1, 2, 3, 4])
>>> np.arange(5).dtype
dtype('int32')
>>> np.arange(5) + 3
array([3, 4, 5, 6, 7])
But somehow that doesn't apply to floor division or modulo:
>>> np.arange(5) // 3
array([0, 0, 0, 1, 1], dtype=int32)
>>> np.arange(5) % 3
array([0, 1, 2, 0, 1], dtype=int32)
Why is there a difference?
Python 3.5.4, NumPy 1.13.1, Windows 64bit
You actually have multiple distinct 32-bit integer dtypes here. This is probably a bug.
NumPy has (accidentally?) created multiple distinct signed 32-bit integer types, probably corresponding to C `int` and `long`. Both of them display as `numpy.int32`, but they're actually different objects. At C level, I believe the type objects are `PyIntArrType_Type` and `PyLongArrType_Type`, generated here.

`dtype` objects have a `type` attribute corresponding to the type object of scalars of that dtype. It is this `type` attribute that NumPy inspects when deciding whether to print dtype information in an array's `repr`: on `numpy.arange(5)` and `numpy.arange(5) + 3`, `.dtype.type` is `numpy.int_`; on `numpy.arange(5) // 3` or `numpy.arange(5) % 3`, `.dtype.type` is the other 32-bit signed integer type.

As for why `+` and `//` have different output dtypes, they use different type resolution routines. Here's the one for `//`, and here's the one for `+`. `//`'s type resolution looks for a ufunc inner loop that takes types the inputs can be safely cast to, while `+`'s type resolution applies NumPy type promotion to the arguments and picks the loop matching the resulting type.
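You can probe both resolution paths from Python. A small sketch (on most modern builds the two operations resolve to the same scalar type object, so the repr difference from the question may not reproduce; the printed values are platform-dependent):

```python
import numpy as np

a = np.arange(5)
b = a // 3

# The dtypes compare equal even if they are backed by distinct scalar
# type objects; repr printing keys off dtype.type, not dtype equality.
print(a.dtype == b.dtype)
print(a.dtype.type, b.dtype.type)

# `+`'s resolver promotes the inputs first; np.result_type shows the
# promoted dtype that its loop lookup will then match.
print(np.result_type(a, 3))

# `//`'s resolver instead searches for a loop whose input types the
# arguments can be safely cast to, per np.can_cast's "safe" rule.
print(np.can_cast(a.dtype, np.int64))
```

Comparing `a.dtype.type` and `b.dtype.type` directly is the quickest way to see whether your build exhibits the duplicated int32 types described above.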