I understand the concept of automatic differentiation, but I couldn't find any explanation of how TensorFlow calculates the error gradient for non-differentiable functions, for example tf.where in my loss function or tf.cond in my graph. They work just fine, but I would like to understand how TensorFlow backpropagates the error through such nodes, since there is no formula to calculate the gradient for them.
How does TensorFlow handle non-differentiable nodes during gradient calculation?
1.6k views · Asked by Natjo
There is 1 answer below.
In the case of tf.where, you have a function with three inputs — the condition C, the value on true T, and the value on false F — and one output Out. The gradient function receives one incoming gradient and has to return three gradients. Currently, no gradient is computed for the condition (that would hardly make sense), so you only need to produce the gradients for T and F. Assuming the inputs and the output are vectors, imagine C[0] is True. Then Out[0] comes from T[0], and its gradient should propagate back through T[0]. On the other hand, F[0] was discarded, so its gradient should be made zero. If C[1] were False, then the gradient for F[1] should propagate but not the one for T[1]. So, in short: for T you propagate the incoming gradient where C is True and zero it out where C is False, and the opposite for F. If you look at the implementation of the gradient of tf.where (the Select operation), it does exactly that. Note that the input values themselves are not used in this computation; anything depending on them is handled by the gradients of the operations that produced those inputs.
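The masking rule described above can be sketched in plain NumPy. This is a minimal illustration with a hypothetical where_grad helper, not TensorFlow's actual registered gradient for Select, but it applies the same rule:

```python
import numpy as np

def where_grad(cond, grad_out):
    """Sketch of the gradient rule for tf.where (the Select op).

    Given the boolean condition and the incoming gradient of the
    output, return gradients for the three inputs (C, T, F).
    """
    zeros = np.zeros_like(grad_out)
    grad_c = None                             # no gradient for the condition
    grad_t = np.where(cond, grad_out, zeros)  # T gets the gradient where C is True
    grad_f = np.where(cond, zeros, grad_out)  # F gets it where C is False
    return grad_c, grad_t, grad_f

cond = np.array([True, False, True])
grad_out = np.array([1.0, 2.0, 3.0])
gc, gt, gf = where_grad(cond, grad_out)
print(gt)  # [1. 0. 3.]
print(gf)  # [0. 2. 0.]
```

Note that only cond and grad_out appear in the computation; the values of T and F themselves are never needed, which matches the remark above.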
For tf.cond, the code is a bit more complicated, because the same operation (Merge) is used in different contexts, and tf.cond also uses Switch operations internally. However, the idea is the same. Essentially, Switch operations are used for each input, so the input that was activated (the first if the condition was True, and the second otherwise) receives the incoming gradient, while the other input gets a "switched off" gradient (like None) and does not propagate anything further back.
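The routing behavior can be sketched the same way. Again, this is a hypothetical helper illustrating the rule, not TensorFlow's actual Merge/Switch gradient code: the incoming gradient goes to whichever branch was taken, and the untaken branch gets None:

```python
def cond_grad(pred, grad_out):
    """Sketch of how tf.cond routes gradients via Switch.

    The branch that actually executed receives the incoming gradient;
    the other branch receives a "switched off" gradient (None), so
    backpropagation stops there.
    """
    if pred:
        return grad_out, None   # gradient flows into the true branch only
    return None, grad_out       # gradient flows into the false branch only

print(cond_grad(True, 5.0))   # (5.0, None)
print(cond_grad(False, 5.0))  # (None, 5.0)
```

In effect, a None gradient plays the same role for tf.cond that the zero mask plays for tf.where: the branch that did not contribute to the output contributes nothing to the backward pass.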