Run options for `experimental_distribute_dataset(s_from_function)`.
```python
tf.distribute.InputOptions(
    experimental_prefetch_to_device=True
)
```
This can be used to hold strategy-specific configuration options.
```python
# Set up TPUStrategy.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

dataset = tf.data.Dataset.range(16)

# Prefetch dataset elements to host memory instead of the TPU device.
distributed_dataset_on_host = (
    strategy.experimental_distribute_dataset(
        dataset,
        tf.distribute.InputOptions(
            experimental_prefetch_to_device=False)))
```
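The same options argument is accepted by `experimental_distribute_datasets_from_function`. A minimal sketch under the same setup, where `dataset_fn` and `distributed_dataset_from_fn` are illustrative names rather than part of the API above:

```python
# Build one dataset per input pipeline; the strategy calls this function
# with a `tf.distribute.InputContext`.
def dataset_fn(input_context):
  # Derive the per-replica batch size from a global batch size of 16.
  batch_size = input_context.get_per_replica_batch_size(16)
  return tf.data.Dataset.range(64).batch(batch_size)

distributed_dataset_from_fn = (
    strategy.experimental_distribute_datasets_from_function(
        dataset_fn,
        tf.distribute.InputOptions(
            experimental_prefetch_to_device=False)))
```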
| Attribute | Description |
|---|---|
| `experimental_prefetch_to_device` | Boolean. Currently only applies to TPUStrategy. Defaults to `True`. If `True`, dataset elements will be prefetched to accelerator device memory. When `False`, dataset elements are prefetched to host device memory. Must be `False` when using the TPUEmbedding API. |
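The prefetch target does not change how the distributed dataset is consumed. A minimal sketch of iterating `distributed_dataset_on_host` from the example above, where `step_fn` is an illustrative per-replica function:

```python
@tf.function
def step_fn(batch):
  # Per-replica computation; the batch is copied from host memory to the
  # TPU device when the step runs.
  return batch + 1

for batch in distributed_dataset_on_host:
  strategy.run(step_fn, args=(batch,))
```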