Hyperparameters used in ModelFitPipeline.
tfr.keras.pipeline.PipelineHparams(
    model_dir: str,
    num_epochs: int,
    steps_per_epoch: int,
    validation_steps: int,
    learning_rate: float,
    loss: Union[str, Dict[str, str]],
    loss_reduction: str = tf.losses.Reduction.AUTO,
    optimizer: str = 'adam',
    loss_weights: Optional[Union[float, Dict[str, float]]] = None,
    steps_per_execution: int = 10,
    automatic_reduce_lr: bool = False,
    early_stopping_patience: int = 0,
    early_stopping_min_delta: float = 0.0,
    use_weighted_metrics: bool = False,
    export_best_model: bool = False,
    best_exporter_metric_higher_better: bool = False,
    best_exporter_metric: str = 'loss',
    strategy: Optional[str] = None,
    cluster_resolver: Optional[tf.distribute.cluster_resolver.ClusterResolver] = None,
    variable_partitioner: Optional[tf.distribute.experimental.partitioners.Partitioner] = None,
    tpu: Optional[str] = ''
)
Hyperparameters to be specified for the ranking pipeline.
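A minimal construction sketch using only the required fields (the path and values are illustrative; 'softmax_loss' assumes a loss key supported by TF-Ranking):

```python
import tensorflow_ranking as tfr

# Minimal hparams for a single-task ranking pipeline; the remaining
# fields fall back to the defaults shown in the signature above.
hparams = tfr.keras.pipeline.PipelineHparams(
    model_dir="/tmp/ranking_model",  # illustrative output path
    num_epochs=5,
    steps_per_epoch=1000,
    validation_steps=100,
    learning_rate=0.05,
    loss="softmax_loss",             # assumed TF-Ranking loss key
)
```

The resulting hparams object is then passed to a ModelFitPipeline (or one of its subclasses) together with model and dataset builders.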
| Attribute | Description |
|---|---|
| model_dir | A path to output the model and training data. | 
| num_epochs | An integer to specify the number of epochs of training. | 
| steps_per_epoch | An integer to specify the number of steps per epoch. When it is None, going over the training data once is counted as an epoch. | 
| validation_steps | An integer to specify the number of validation steps in each epoch. A mini-batch of data is evaluated in each step, so this is the number of mini-batches used for validation in each epoch. | 
| learning_rate | A float to indicate the learning rate of the optimizer. | 
| loss | A string or a map to strings that indicates the loss to be used. When loss is a string, all outputs and labels will be trained with the same loss. When loss is a map, outputs and labels will be trained with losses implied by the corresponding keys (see the multi-task sketch after this table). | 
| loss_reduction | An option in tf.keras.losses.Reduction to specify the reduction method. | 
| optimizer | An option among tf.keras.optimizers identifiers (e.g., 'adam') to specify the optimizer to be used. | 
| loss_weights | None or a float or a map to floats that indicate the relative weights for each loss. When not specified, all losses are applied with the same weight 1. | 
| steps_per_execution | An integer to specify the number of steps executed in each operation. Tune this to optimize training performance in distributed training. | 
| automatic_reduce_lr | A boolean to indicate whether to use the ReduceLROnPlateau callback. | 
| early_stopping_patience | Number of epochs with no improvement after which training will be stopped. | 
| early_stopping_min_delta | Minimum change in the monitored quantity to qualify as an improvement; i.e., an absolute change of less than early_stopping_min_delta counts as no improvement. | 
| use_weighted_metrics | A boolean to indicate whether to use weighted metrics. | 
| export_best_model | A boolean to indicate whether to export the best model evaluated by the best_exporter_metric on the validation data. | 
| best_exporter_metric_higher_better | A boolean to indicate whether a higher value of best_exporter_metric is better. | 
| best_exporter_metric | A string to specify the metric used to monitor the training and to export the best model. Defaults to 'loss'. | 
| strategy | An option of strategies supported in strategy_utils. Choose from ["MirroredStrategy", "MultiWorkerMirroredStrategy", "ParameterServerStrategy", "TPUStrategy"]; see the sketch after this table. | 
| cluster_resolver | A cluster_resolver to build strategy. | 
| variable_partitioner | Variable partitioner to be used in ParameterServerStrategy. | 
| tpu | TPU address for TPUStrategy. Not used for other strategies. | 
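For multi-task training, loss and loss_weights accept maps keyed by output name, and strategy selects a distribution strategy by name. A sketch under assumed head names ('click', 'purchase') and assumed TF-Ranking loss keys:

```python
import tensorflow_ranking as tfr

# Hypothetical two-head setup: each output/label pair gets its own
# loss, weighted 2:1 toward purchases, trained with MirroredStrategy
# and exporting the best model by validation loss.
hparams = tfr.keras.pipeline.PipelineHparams(
    model_dir="/tmp/multitask_model",
    num_epochs=10,
    steps_per_epoch=500,
    validation_steps=50,
    learning_rate=0.01,
    loss={
        "click": "softmax_loss",              # assumed loss keys
        "purchase": "pairwise_logistic_loss",
    },
    loss_weights={"click": 1.0, "purchase": 2.0},
    export_best_model=True,
    best_exporter_metric="loss",
    strategy="MirroredStrategy",
)
```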
Methods
__eq__
__eq__(
    other
)
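PipelineHparams behaves like a dataclass, so __eq__ compares two instances field by field (a minimal sketch; the field values are illustrative):

```python
import tensorflow_ranking as tfr

a = tfr.keras.pipeline.PipelineHparams(
    model_dir="/tmp/model", num_epochs=1, steps_per_epoch=10,
    validation_steps=5, learning_rate=0.05, loss="softmax_loss")
b = tfr.keras.pipeline.PipelineHparams(
    model_dir="/tmp/model", num_epochs=1, steps_per_epoch=10,
    validation_steps=5, learning_rate=0.05, loss="softmax_loss")

assert a == b  # identical field values compare equal
assert a != tfr.keras.pipeline.PipelineHparams(
    model_dir="/tmp/model", num_epochs=2, steps_per_epoch=10,
    validation_steps=5, learning_rate=0.05, loss="softmax_loss")
```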