A layer (....) which is an input to the Conv operator producing the output array model/re_lu_1/Relu, is lacking min/max data, which is necessary for quantization. If accuracy matters, either target a non-quantized output format, or run quantized training with your model from a floating point checkpoint to change the input graph to contain min/max information. If you don't care about accuracy, you can pass --default_ranges_min= and --default_ranges_max= for easy experimentation.
Batch Normalization Quantize Tensorflow 1.x does not have MinMax information
916 views · Asked by dtlam26 · 1 answer
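Some background on why the converter insists on min/max data: uint8 quantization derives each tensor's scale and zero point from that float range, and every quantize/dequantize step goes through those two numbers. A minimal pure-Python sketch of the arithmetic, using the 0..6 range one might pass via `--default_ranges_min`/`--default_ranges_max` (the function names here are illustrative, not TensorFlow API):

```python
def quant_params(rmin, rmax, qmin=0, qmax=255):
    """Derive uint8 affine-quantization scale/zero-point from a float range."""
    scale = (rmax - rmin) / (qmax - qmin)
    zero_point = int(round(qmin - rmin / scale))
    return scale, zero_point

def quantize(x, scale, zero_point, qmin=0, qmax=255):
    """Map a float value to its clamped uint8 representation."""
    q = int(round(x / scale + zero_point))
    return max(qmin, min(qmax, q))

def dequantize(q, scale, zero_point):
    """Map a uint8 value back to (approximately) the original float."""
    return (q - zero_point) * scale

# A ReLU6-style range [0, 6], as in --default_ranges_min=0 --default_ranges_max=6
scale, zp = quant_params(0.0, 6.0)  # scale = 6/255, zero_point = 0
```

Because the scale comes straight from the range, a wrong or dummy default range distorts every dequantized value, which is why the error message pitches `--default_ranges_min`/`--default_ranges_max` only as an experimentation shortcut.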
For TensorFlow 1.x, if you want to quantize, you have to rewrite the graph with fake-quantization nodes to activate quantization of the model. There are 3 phases of quantization: rewriting the training graph with fake-quant nodes and fine-tuning it so min/max ranges get recorded, rewriting the eval/inference graph the same way, and finally converting the frozen graph to a quantized TFLite model.
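These phases can be sketched with the TF 1.x API (`tf.contrib.quantize` was removed in TF 2.x, so this runs only on TensorFlow 1.x; `model_fn` and the file/tensor names below are placeholders for your own model):

```python
import tensorflow as tf  # TensorFlow 1.x only (uses tf.contrib)

# Phase 1: insert fake-quant nodes into the training graph, then fine-tune
# from a floating-point checkpoint so each tensor records min/max ranges.
train_graph = tf.Graph()
with train_graph.as_default():
    model_fn()  # placeholder: build your model here
    tf.contrib.quantize.create_training_graph(input_graph=train_graph)
    # ... fine-tune and save a checkpoint ...

# Phase 2: rebuild the graph for inference with eval fake-quant nodes,
# restore the fine-tuned checkpoint, and freeze it.
eval_graph = tf.Graph()
with eval_graph.as_default():
    model_fn()
    tf.contrib.quantize.create_eval_graph(input_graph=eval_graph)
    # ... restore the checkpoint and freeze the graph ...

# Phase 3: convert the frozen eval graph to a quantized TFLite model.
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    'frozen_eval_graph.pb', ['input'], ['output'])  # placeholder names
converter.inference_type = tf.uint8
converter.quantized_input_stats = {'input': (127.5, 127.5)}  # (mean, std)
tflite_model = converter.convert()
```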
However, the most important factor is the configuration of batch normalization in the model. After trying multiple configurations, the best one is to use BatchNormalization from tensorflow.keras.layers without the fused option (fused=False). The reason is that TensorFlow wants to avoid quantizing the batch-norm folding result, so an activation placed behind the batch norm won't work. Details in [here][1]. In short, this layer should be attached only under a tensorflow.keras.layers.Conv2D that carries the activation param (Relu/Relu6/Identity). If you follow the process Conv2D => Activation => BatchNorm, the layer will not yield the "does not have MinMax information" error.
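A minimal sketch of that ordering with tf.keras layers (TF 1.x-era API; note that `fused=False` is not accepted by the newer Keras 3, and the layer sizes here are arbitrary):

```python
from tensorflow.keras import Sequential, layers

# Conv2D => Activation => BatchNorm: the activation is given to Conv2D
# directly, and batch norm is non-fused so the quantization rewriter can
# fold it without dropping the min/max information.
model = Sequential([
    layers.Conv2D(16, 3, activation='relu', input_shape=(32, 32, 3)),
    layers.BatchNormalization(fused=False),
])
```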