I have recently decided to abandon the 20-year-old, unsupported vanilla ANSI C libraries with which I had done my previous modeling work in favor of Keras, using Theano as a backend. I want to be able to model in Keras the same kinds of things I could model in C.
A typical modeling scenario would be to train a network to map bidirectionally between A and B via one or more hidden layers:
A <==> h1 <==> hn <==> B
My research involves mapping between different types of representations (for example, you can hear the word "dog" and know what the letters look like, or you can read the letters DOG and know what those letters sound like). In this architecture, both A and B are sometimes inputs and sometimes outputs. After reading through the Sequential model description, it's not clear that Keras allows this, since it explicitly has a first layer -- and so sometimes A would be the first layer, but sometimes B would be the first layer. Is it possible for a Keras layer to be both an input and an output layer?
Inverting a model (running it backwards) isn't straightforward.
Imagine a fully connected neuron whose forward pass computes:

output = w1*i1 + w2*i2 + w3*i3 + b

Now try to go backwards: even if you know the weights and the bias, there are infinitely many solutions for i1, i2, and i3 -- three unknowns but only one equation.
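A toy demonstration of why the inputs can't be recovered (the weights and bias here are arbitrary values chosen for illustration): two different input triples produce exactly the same forward output.

```python
# Arbitrary example weights and bias for a single fully connected neuron.
w = (0.5, -1.0, 2.0)
b = 0.1

def forward(i1, i2, i3):
    """Forward pass: weighted sum of inputs plus bias."""
    return w[0] * i1 + w[1] * i2 + w[2] * i3 + b

# Two distinct input triples, one identical output -- so knowing the
# output (and even the weights and bias) cannot tell you the inputs.
print(forward(1.0, 2.0, 3.0))    # 4.6
print(forward(3.0, 2.5, 2.75))   # 4.6
```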
So a possible approach (I'm not sure how well it would train) is to create a model that accepts both A and B as inputs at the same time, and also outputs both values.

When you want to map in one direction, you could feed the real values on one input and zeros on the other, or something like that.
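A minimal sketch of that idea using the Keras functional API (rather than Sequential), so that A and B are each both an input and an output. The dimensions `A_DIM`, `B_DIM`, and `HIDDEN` are placeholder values, not anything from the question:

```python
from keras.layers import Input, Dense, concatenate
from keras.models import Model

A_DIM, B_DIM, HIDDEN = 10, 12, 32  # assumed representation sizes

# Two separate input branches, one per representation.
in_a = Input(shape=(A_DIM,), name="input_A")
in_b = Input(shape=(B_DIM,), name="input_B")

# Shared hidden layers see both representations at once.
h1 = Dense(HIDDEN, activation="relu")(concatenate([in_a, in_b]))
hn = Dense(HIDDEN, activation="relu")(h1)

# Two output heads, one reconstructing each representation.
out_a = Dense(A_DIM, name="output_A")(hn)
out_b = Dense(B_DIM, name="output_B")(hn)

model = Model(inputs=[in_a, in_b], outputs=[out_a, out_b])
model.compile(optimizer="adam", loss="mse")
```

To map A -> B you would then call `model.predict` with the real A batch and an all-zeros B batch, and read off `output_B` (and symmetrically for B -> A). Whether zero-masking one side actually trains well is an open question; this only shows that the wiring is expressible in Keras.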