I am defining a custom layer as the last one of my network. Here I need to convert a tensor (the layer's input) into a NumPy array, in order to define a function on it. In particular, I want to define my last layer similarly to this:
```python
import tensorflow as tf
from tensorflow.keras import layers

def hat(x):
    # Skew-symmetric ("hat") matrix built from the components of x
    A = tf.constant([[0., -x[2], x[1]],
                     [x[2], 0., -x[0]],
                     [-x[1], x[0], 0.]])
    return A

class FinalLayer(layers.Layer):
    def __init__(self, units):
        super(FinalLayer, self).__init__()
        self.units = units

    def call(self, inputs):
        p = tf.constant([[1.], [2.], [3.]])  # column vector, so tf.matmul is well-defined
        q = inputs.numpy()                   # works only in eager mode
        p = tf.matmul(hat(q), p)
        return p
```
The weights do not matter for my question, since I know how to manage them. The problem is that this layer works perfectly in eager mode, but with that option the training phase is too slow. My question is: is there something I can do to implement this layer without eager mode? Alternatively, can I access the single components x[i] of a tensor without converting it into a NumPy array?
You can rewrite your `hat` function a bit differently, so that it accepts a Tensor instead of a NumPy array. For example, here is a minimal sketch of that idea, assembling the matrix with `tf.stack` so that every element stays a tensor:
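```python
import tensorflow as tf

def hat(x):
    # Build each row from tensor components instead of Python/NumPy
    # scalars, so the function can be traced in graph (non-eager) mode.
    zero = tf.zeros_like(x[0])
    return tf.stack([
        tf.stack([zero,  -x[2],  x[1]]),
        tf.stack([x[2],   zero, -x[0]]),
        tf.stack([-x[1],  x[0],  zero]),
    ])
```

Calling it as, e.g., `hat(tf.constant([1., 2., 3.]))` will result in:

```
tf.Tensor(
[[ 0. -3.  2.]
 [ 3.  0. -1.]
 [-2.  1.  0.]], shape=(3, 3), dtype=float32)
```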
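With `hat` written this way, the layer no longer needs `.numpy()` at all. Below is a sketch of the corresponding layer, assuming (as in your example) that `inputs` is a single 3-vector rather than a batch, and using `tf.linalg.matvec` for the matrix-vector product:

```python
import tensorflow as tf
from tensorflow.keras import layers

class FinalLayer(layers.Layer):
    def __init__(self, units):
        super(FinalLayer, self).__init__()
        self.units = units

    def call(self, inputs):
        p = tf.constant([1., 2., 3.])
        # inputs is consumed directly as a tensor: indexing (inputs[i])
        # works on tensors, so no eager-only .numpy() call is needed.
        return tf.linalg.matvec(hat(inputs), p)
```

Since every operation here is a TensorFlow op, the layer is traceable inside `tf.function`, so you can train without eager mode, and gradients flow through `tf.stack` as usual.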