I have two numpy arrays of shape (436, 1024, 2). The last dimension (2) represents 2D vectors. I want to compare the 2D vectors of the two arrays element-wise in order to find the average angular error. To do so I use the dot product, which works perfectly fine when looping over the first two dimensions of the arrays, but for loops in Python can be slow, so I would like to use a numpy function instead.
I found that np.tensordot allows performing dot products element-wise; however, I do not manage to use its axes argument correctly:
```python
import numpy as np

def average_angular_error_vec(estimated_oc: np.array, target_oc: np.array):
    estimated_oc = np.float64(estimated_oc)
    target_oc = np.float64(target_oc)
    norm1 = np.linalg.norm(estimated_oc, axis=2)
    norm2 = np.linalg.norm(target_oc, axis=2)
    norm1 = norm1[..., np.newaxis]
    norm2 = norm2[..., np.newaxis]
    unit_vector1 = np.divide(estimated_oc, norm1)
    unit_vector2 = np.divide(target_oc, norm2)
    dot_product = np.tensordot(unit_vector1, unit_vector2, axes=2)
    angle = np.arccos(dot_product)
    return np.mean(angle)
```
I get the following error:

```
ValueError: shape-mismatch for sum
```
Below is my function which computes the average angular error correctly:
```python
def average_angular_error(estimated_oc: np.array, target_oc: np.array):
    h, w, c = target_oc.shape
    r = np.zeros((h, w), dtype="float64")
    estimated_oc = np.float64(estimated_oc)
    target_oc = np.float64(target_oc)
    for i in range(h):
        for j in range(w):
            unit_vector_1 = estimated_oc[i][j] / np.linalg.norm(estimated_oc[i][j])
            unit_vector_2 = target_oc[i][j] / np.linalg.norm(target_oc[i][j])
            dot_product = np.dot(unit_vector_1, unit_vector_2)
            angle = np.arccos(dot_product)
            r[i][j] = angle
    return np.mean(r)
```
The problem is likely much simpler than you are making it. If you apply np.tensordot to a pair of arrays of shape (w, h, 2) along the last axis, you will get a result of shape (w, h, w, h). This is not what you want. There are three simple approaches here. In addition to showing the options, I've shown a few tips and tricks for making the code simpler without changing any of the basic functionality:

1. Do the sum-reduction manually (using + and *):
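For example, a minimal sketch of this option (the function name is mine; `keepdims=True` stands in for the explicit `np.newaxis` step in your attempt):

```python
import numpy as np

def average_angular_error_sum(estimated_oc, target_oc):
    # Normalize along the last axis; keepdims=True keeps a trailing axis of
    # size 1 so the division broadcasts, replacing the manual np.newaxis step
    v1 = estimated_oc / np.linalg.norm(estimated_oc, axis=-1, keepdims=True)
    v2 = target_oc / np.linalg.norm(target_oc, axis=-1, keepdims=True)
    # Element-wise multiply, then sum over the vector components: shape (h, w)
    dot_product = (v1 * v2).sum(axis=-1)
    return np.mean(np.arccos(dot_product))
```

On your (436, 1024, 2) inputs this returns a single scalar, just like your loop version.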
2. Use np.matmul (a.k.a. @) with properly broadcasted dimensions. np.matmul and np.dot both require the last dimension of the first array to match the second-to-last of the second, like with normal matrix multiplication. None is an alias for np.newaxis, which introduces a new axis of size 1 at the location of your choice. In this case, I made the first array (w, h, 1, 2) and the second (w, h, 2, 1). That ensures that the last two dimensions are multiplied as a transposed vector and a regular vector at every corresponding element:
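A sketch of the matmul version (again, the function name is mine; the `[..., 0, 0]` indexing drops the two singleton dimensions that matmul leaves behind):

```python
import numpy as np

def average_angular_error_matmul(estimated_oc, target_oc):
    v1 = estimated_oc / np.linalg.norm(estimated_oc, axis=-1, keepdims=True)
    v2 = target_oc / np.linalg.norm(target_oc, axis=-1, keepdims=True)
    # (h, w, 1, 2) @ (h, w, 2, 1) -> (h, w, 1, 1); index away the 1x1 result
    dot_product = (v1[..., None, :] @ v2[..., :, None])[..., 0, 0]
    return np.mean(np.arccos(dot_product))
```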
3. Use np.einsum:

You can't use np.dot or np.tensordot for this. dot and tensordot keep the untouched dimensions of both arrays, as explained earlier. matmul broadcasts them together, which is what you want.
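The einsum option sketched (with a hypothetical function name): in the subscript string 'ijk,ijk->ij', the indices i and j appear in the output and are kept, while k appears only on the left, so it is summed over — exactly the per-pixel dot product.

```python
import numpy as np

def average_angular_error_einsum(estimated_oc, target_oc):
    v1 = estimated_oc / np.linalg.norm(estimated_oc, axis=-1, keepdims=True)
    v2 = target_oc / np.linalg.norm(target_oc, axis=-1, keepdims=True)
    # Multiply matching elements and sum over the shared last index k
    dot_product = np.einsum('ijk,ijk->ij', v1, v2)
    return np.mean(np.arccos(dot_product))
```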