I am trying to create a simple audio recognition model to spot keywords. Since my data set is small, I am performing transfer learning. This is how the graph looks. Following this link, I created a module. Here is the code:
import tensorflow_hub as hub
import tensorflow as tf
# pylint: disable=unused-import
from tensorflow.contrib.framework.python.ops import audio_ops as contrib_audio
# pylint: enable=unused-import
def module_fn():
    input_name = "Reshape:0"
    output_name = "Reshape_2:0"
    graph_def = tf.GraphDef()
    with open('my_frozen_graph.pb', "rb") as f:
        graph_def.ParseFromString(f.read())
    input_ten = tf.placeholder(tf.float32, shape=(1, 98, 40))
    output_ten, = tf.import_graph_def(graph_def, input_map={input_name: input_ten}, return_elements=[output_name])
    hub.add_signature(inputs=input_ten, outputs=output_ten)

spec = hub.create_module_spec(module_fn)
module = hub.Module(spec)

with tf.Session() as session:
    module.export('test_module', session)
It does execute and creates a 'test_module' folder:
test_module
|--> assets
|--> variables
|--> saved_model.pb
|--> tfhub_module.pb
However, I have a few questions:
The variables folder is empty. Is this how it is supposed to be?
input_ten = tf.placeholder(tf.float32, shape=(1, 98, 40))
Is this correct? 98×40 is the image size, and the first dimension usually represents the batch size. Should it be kept as 1, or set to None for an unknown batch size? After loading the module into the script,
height, width = hub.get_expected_image_size('test_module')
is giving me an error.
Let me try to answer your questions in turn.
If the graph def you build your model from is indeed frozen (i.e., all variables have been replaced by constants), there are no variables that need writing to the checkpoint that is commonly located at variables/variables*. So this looks explicable to me. -- That said, Hub modules would give you a way to avoid freezing graph defs: call the original graph building code in the module_fn, and restore pre-trained variables in the session before calling Module.export().
For your type of module, you get to make the rules. ;-) Hub Modules can accommodate all sorts of input and output shapes, including partially or fully unknown shapes. An input placeholder like above will have to have a shape that is compatible with the graph you are plugging it into. That graph, in turn, will use shapes that work with the convolutions it is doing. Generally speaking, it is often useful to use the leading dimension for batch size and leave that unspecified (None).
hub.get_expected_image_size() is meant for use with image inputs; I would avoid it here.