tf.data.experimental.service.DispatchServer
An in-process tf.data service dispatch server.
tf.data.experimental.service.DispatchServer(
port, protocol=None, start=True
)
A tf.data.experimental.service.DispatchServer coordinates a cluster of
tf.data.experimental.service.WorkerServers. When the workers start, they
register themselves with the dispatcher.
dispatcher = tf.data.experimental.service.DispatchServer(port=0)
dispatcher_address = dispatcher.target.split("://")[1]
worker = tf.data.experimental.service.WorkerServer(
port=0, dispatcher_address=dispatcher_address)
dataset = tf.data.Dataset.range(10)
dataset = dataset.apply(tf.data.experimental.service.distribute(
processing_mode="parallel_epochs", service=dispatcher.target))
print(list(dataset.as_numpy_iterator()))
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
When starting a dedicated tf.data dispatch process, use join() to block
indefinitely after starting up the server.
dispatcher = tf.data.experimental.service.DispatchServer(port=5050)
dispatcher.join()
Args:
  port: Specifies the port to bind to.
  protocol: (Optional.) Specifies the protocol to be used by the server.
    Acceptable values include "grpc" and "grpc+local". Defaults to "grpc".
  start: (Optional.) Boolean, indicating whether to start the server after
    creating it. Defaults to True.
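To illustrate how these arguments relate to the server's target string, here is a minimal pure-Python sketch (not the real implementation, and runnable without TensorFlow): the dispatcher binds to port, and its target takes the form protocol://address. The helper name make_target and the "localhost" host are assumptions for illustration only.

```python
# Sketch only: mimics how the documented constructor arguments shape
# the protocol://address target a DispatchServer exposes.
VALID_PROTOCOLS = ("grpc", "grpc+local")  # the documented acceptable values

def make_target(port, protocol=None, host="localhost"):
    """Builds a protocol://address string like DispatchServer.target."""
    protocol = protocol or "grpc"  # the documented default
    if protocol not in VALID_PROTOCOLS:
        raise ValueError(f"unsupported protocol: {protocol!r}")
    return f"{protocol}://{host}:{port}"

print(make_target(5050))             # grpc://localhost:5050
print(make_target(0, "grpc+local"))  # grpc+local://localhost:0
```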
Raises:
  tf.errors.OpError: Or one of its subclasses if an error occurs while
    creating the TensorFlow server.
Attributes:
  target: Returns a target that can be used to connect to the server.

    dispatcher = tf.data.experimental.service.DispatchServer(port=0)
    dataset = tf.data.Dataset.range(10)
    dataset = dataset.apply(tf.data.experimental.service.distribute(
        processing_mode="parallel_epochs", service=dispatcher.target))

    The returned string will be in the form protocol://address, e.g.
    "grpc://localhost:5050".
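The examples above recover the dispatcher address with target.split("://")[1] before passing it to a WorkerServer. A stand-alone sketch of that idiom (plain Python, no TensorFlow required; the helper name dispatcher_address is hypothetical):

```python
def dispatcher_address(target):
    """Strips the protocol prefix from a protocol://address target,
    yielding the address form that WorkerServer's dispatcher_address
    argument expects in the example above."""
    protocol, sep, address = target.partition("://")
    if not sep or not address:
        raise ValueError(f"not a protocol://address target: {target!r}")
    return address

print(dispatcher_address("grpc://localhost:5050"))  # localhost:5050
```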
Methods
join
join()
Blocks until the server has shut down.
This is useful when starting a dedicated dispatch process.
dispatcher = tf.data.experimental.service.DispatchServer(port=5050)
dispatcher.join()
Raises:
  tf.errors.OpError: Or one of its subclasses if an error occurs while
    joining the server.
start
start()
Starts this server.
dispatcher = tf.data.experimental.service.DispatchServer(port=0,
start=False)
dispatcher.start()
Raises:
  tf.errors.OpError: Or one of its subclasses if an error occurs while
    starting the server.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2020-10-01 UTC.