Retrieve FTRL embedding parameters with debug support.
An op that retrieves optimization parameters from the embedding tables to host memory. It must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to retrieve updated parameters before saving a checkpoint.
Nested Classes
| class | RetrieveTPUEmbeddingFTRLParametersGradAccumDebug.Options | Optional attributes for RetrieveTPUEmbeddingFTRLParametersGradAccumDebug |
Constants
| String | OP_NAME | The name of this op, as known by TensorFlow core engine | 
Public Methods
| Output<TFloat32> | accumulators() | Parameter accumulators updated by the FTRL optimization algorithm. |
| static RetrieveTPUEmbeddingFTRLParametersGradAccumDebug.Options | config(String config) | Sets the config option. |
| static RetrieveTPUEmbeddingFTRLParametersGradAccumDebug | create(Scope scope, Long numShards, Long shardId, Options... options) | Factory method to create a class wrapping a new RetrieveTPUEmbeddingFTRLParametersGradAccumDebug operation. |
| Output<TFloat32> | gradientAccumulators() | Parameter gradient_accumulators updated by the FTRL optimization algorithm. |
| Output<TFloat32> | linears() | Parameter linears updated by the FTRL optimization algorithm. |
| static RetrieveTPUEmbeddingFTRLParametersGradAccumDebug.Options | tableId(Long tableId) | Sets the tableId option. |
| static RetrieveTPUEmbeddingFTRLParametersGradAccumDebug.Options | tableName(String tableName) | Sets the tableName option. |
Inherited Methods
Constants
public static final String OP_NAME
The name of this op, as known by TensorFlow core engine
Public Methods
public Output<TFloat32> accumulators ()
Parameter accumulators updated by the FTRL optimization algorithm.
public static RetrieveTPUEmbeddingFTRLParametersGradAccumDebug create (Scope scope, Long numShards, Long shardId, Options... options)
Factory method to create a class wrapping a new RetrieveTPUEmbeddingFTRLParametersGradAccumDebug operation.
Parameters
| scope | current scope |
| options | carries optional attribute values |
Returns
- a new instance of RetrieveTPUEmbeddingFTRLParametersGradAccumDebug
public Output<TFloat32> gradientAccumulators ()
Parameter gradient_accumulators updated by the FTRL optimization algorithm.
public Output<TFloat32> linears ()
Parameter linears updated by the FTRL optimization algorithm.
public Output<TFloat32> parameters ()
Parameter parameters updated by the FTRL optimization algorithm.
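A minimal usage sketch, assuming the TensorFlow Java API with this op class in the `org.tensorflow.op.tpu` package and a TPU system already set up by a ConfigureTPUEmbeddingHost op; the table name, shard values, and class name `RetrieveFtrlExample` are illustrative:

```java
import org.tensorflow.Graph;
import org.tensorflow.Output;
import org.tensorflow.op.Ops;
import org.tensorflow.op.tpu.RetrieveTPUEmbeddingFTRLParametersGradAccumDebug;
import org.tensorflow.types.TFloat32;

public class RetrieveFtrlExample {
  public static void main(String[] args) {
    try (Graph g = new Graph()) {
      Ops tf = Ops.create(g);

      // Build the retrieve op. On a real TPU system, a
      // ConfigureTPUEmbeddingHost op must have run first so the embedding
      // table configuration is in place before this op executes.
      RetrieveTPUEmbeddingFTRLParametersGradAccumDebug retrieve =
          RetrieveTPUEmbeddingFTRLParametersGradAccumDebug.create(
              tf.scope(),
              1L, // numShards
              0L, // shardId
              RetrieveTPUEmbeddingFTRLParametersGradAccumDebug
                  .tableName("ftrl_table")); // illustrative table name

      // The four outputs mirror the FTRL optimizer state plus the debug
      // gradient accumulators; typically fetched before writing a checkpoint.
      Output<TFloat32> parameters = retrieve.parameters();
      Output<TFloat32> accumulators = retrieve.accumulators();
      Output<TFloat32> linears = retrieve.linears();
      Output<TFloat32> gradAccum = retrieve.gradientAccumulators();
    }
  }
}
```

Note that `tableName` (or `tableId`) selects which embedding table's state is retrieved; only one of the two should normally be supplied.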