Module: tfmot.quantization.keras.default_8bit.default_8bit_transforms
Module containing the default 8-bit transforms.
Classes
class ConcatTransform: Transform for Concatenate. Quantize only after concatenation.
class ConcatTransform3Inputs: Transform for Concatenate. Quantize only after concatenation.
class ConcatTransform4Inputs: Transform for Concatenate. Quantize only after concatenation.
class ConcatTransform5Inputs: Transform for Concatenate. Quantize only after concatenation.
class ConcatTransform6Inputs: Transform for Concatenate. Quantize only after concatenation.
class Conv2DBatchNormActivationQuantize: Transform to be applied to a "Conv2D" + "BatchNorm" + "ReLU" graph.
class Conv2DBatchNormQuantize: Transform to be applied to a "Conv2D" + "BatchNorm" graph.
class Conv2DBatchNormReLUQuantize: Transform to be applied to a "Conv2D" + "BatchNorm" + "ReLU" graph.
class Conv2DReshapeBatchNormActivationQuantize: Transform to be applied to a "Conv2D" + "Reshape" + "BatchNorm" + "ReLU" graph.
class Conv2DReshapeBatchNormQuantize: Transform to be applied to a "Conv2D" + "Reshape" + "BatchNorm" graph.
class Conv2DReshapeBatchNormReLUQuantize: Transform to be applied to a "Conv2D" + "Reshape" + "BatchNorm" + "ReLU" graph.
class InputLayerQuantize: Quantizes InputLayer by adding a QuantizeLayer after it.
class LayerReLUQuantize: Transform to be applied to an "Add" + "ReLU" graph.
class LayerReluActivationQuantize: Transform to be applied to an "Add" + "ReLU" graph.
class SeparableConv1DQuantize: Adds QAT support for the Keras SeparableConv1D layer.
class SeparableConvQuantize: Breaks SeparableConv into a DepthwiseConv and a Conv layer.
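These transforms are normally not instantiated directly; they are applied internally by the default 8-bit quantization scheme when a model is quantized. As a minimal, illustrative sketch (the model architecture and layer sizes below are made-up examples, not part of this module's API), quantizing a model that contains a Conv2D -> BatchNorm -> ReLU pattern exercises Conv2DBatchNormReLUQuantize so that quantization is placed after the fused block:

# Minimal sketch: quantize_model() uses the default 8-bit scheme, which runs the
# graph transforms in this module (e.g. Conv2DBatchNormReLUQuantize) before
# wrapping layers for quantization-aware training.
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# A small functional model containing the Conv2D -> BatchNorm -> ReLU pattern.
inputs = tf.keras.Input(shape=(32, 32, 3))
x = tf.keras.layers.Conv2D(16, 3, padding='same')(inputs)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.ReLU()(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(10)(x)
model = tf.keras.Model(inputs, outputs)

# The default 8-bit transforms rewrite the Conv2D/BatchNorm/ReLU subgraph so that
# fake-quantization happens only after the folded block.
qat_model = tfmot.quantization.keras.quantize_model(model)
qat_model.summary()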