# tf.data.experimental.service.WorkerServer
An in-process tf.data service worker server.
```python
tf.data.experimental.service.WorkerServer(
    port, dispatcher_address, worker_address=None, protocol=None, start=True
)
```
A `tf.data.experimental.service.WorkerServer` performs `tf.data.Dataset` processing for user-defined datasets and provides the resulting elements over RPC. A worker is associated with a single `tf.data.experimental.service.DispatchServer`.
```python
dispatcher = tf.data.experimental.service.DispatchServer(port=0)
dispatcher_address = dispatcher.target.split("://")[1]
worker = tf.data.experimental.service.WorkerServer(
    port=0, dispatcher_address=dispatcher_address)
dataset = tf.data.Dataset.range(10)
dataset = dataset.apply(tf.data.experimental.service.distribute(
    processing_mode="parallel_epochs", service=dispatcher.target))
print(list(dataset.as_numpy_iterator()))
# [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```
When starting a dedicated tf.data worker process, use `join()` to block indefinitely after starting up the server.
```python
worker = tf.data.experimental.service.WorkerServer(
    port=5051, dispatcher_address="grpc://localhost:5050")
worker.join()
```
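The first example derives `dispatcher_address` by splitting the protocol prefix off `dispatcher.target`. As a sketch (the helper name here is ours, not part of the tf.data service API), that normalization can be factored out so it accepts either a full target or a bare `host:port` string:

```python
def dispatcher_host_port(target):
    """Return the bare "host:port" address expected by WorkerServer's
    dispatcher_address argument, stripping a protocol prefix such as
    "grpc://" if one is present."""
    return target.split("://")[-1]

print(dispatcher_host_port("grpc://localhost:5050"))  # localhost:5050
print(dispatcher_host_port("localhost:5050"))         # localhost:5050
```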
| Args | |
|------|---|
| `port` | Specifies the port to bind to. A value of 0 indicates that the worker can bind to any available port. |
| `dispatcher_address` | Specifies the address of the dispatcher. |
| `worker_address` | (Optional.) Specifies the address of the worker server. This address is passed to the dispatcher so that the dispatcher can tell clients how to connect to this worker. Defaults to `"localhost:%port%"`, where `%port%` is replaced with the port used by the worker. |
| `protocol` | (Optional.) Specifies the protocol to be used by the server. Acceptable values include `"grpc"` and `"grpc+local"`. Defaults to `"grpc"`. |
| `start` | (Optional.) Boolean indicating whether to start the server after creating it. Defaults to `True`. |
| Raises | |
|--------|---|
| `tf.errors.OpError` | Or one of its subclasses if an error occurs while creating the TensorFlow server. |
## Methods

### `join`
```python
join()
```
Blocks until the server has shut down.
This is useful when starting a dedicated worker process.
```python
worker_server = tf.data.experimental.service.WorkerServer(
    port=5051, dispatcher_address="grpc://localhost:5050")
worker_server.join()
```
This method currently blocks forever.
| Raises | |
|--------|---|
| `tf.errors.OpError` | Or one of its subclasses if an error occurs while joining the server. |
### `start`
```python
start()
```
Starts this server.
| Raises | |
|--------|---|
| `tf.errors.OpError` | Or one of its subclasses if an error occurs while starting the server. |
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2020-10-01 UTC.