GPU device plugins
Note: This page is for non-NVIDIA® GPU devices. For NVIDIA® GPU support, go to the Install TensorFlow with pip guide.
TensorFlow's
pluggable device
architecture adds new device support as separate plug-in packages that are
installed alongside the official TensorFlow package.
The mechanism requires no device-specific changes in the TensorFlow code. It
relies on C APIs to communicate with the TensorFlow binary in a stable manner.
Plug-in developers maintain separate code repositories and distribution packages
for their plug-ins and are responsible for testing their devices.
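Because a plug-in registers its device when TensorFlow is imported, a script can probe for a device type at runtime and fall back to the CPU when the plug-in package isn't installed. A minimal sketch, using the hypothetical APU device from the example below (the `pick_device` helper is illustrative, not part of the TensorFlow API):

```python
import tensorflow as tf

def pick_device(preferred="APU", fallback="CPU"):
    """Return a device string for `preferred` if a plug-in registered it,
    otherwise fall back. PluggableDevices appear in list_physical_devices()
    once their plug-in package is installed and TensorFlow is imported."""
    registered = {d.device_type for d in tf.config.list_physical_devices()}
    return f"/{preferred}:0" if preferred in registered else f"/{fallback}:0"

# With no APU plug-in installed, this places the op on the CPU.
with tf.device(pick_device()):
    x = tf.nn.relu(tf.constant([-1.0, 0.5]))
```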
Use device plugins
To use a particular device as you would a native device in TensorFlow, install
the device plug-in package for that device. The following code snippet shows how
the plug-in for a new demonstration device, Awesome Processing Unit (APU), is
installed and used. For simplicity, this sample APU plug-in has only one custom
kernel, for ReLU:
# Install the APU example plug-in package
$ pip install tensorflow-apu-0.0.1-cp36-cp36m-linux_x86_64.whl
...
Successfully installed tensorflow-apu-0.0.1
With the plug-in installed, test that the device is visible and run an operation
on the new APU device:
import tensorflow as tf  # TensorFlow registers PluggableDevices here.

tf.config.list_physical_devices()  # APU device is visible to TensorFlow.
[PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU'), PhysicalDevice(name='/physical_device:APU:0', device_type='APU')]

a = tf.random.normal(shape=[5], dtype=tf.float32)  # Runs on CPU.
b = tf.nn.relu(a)  # Runs on APU.

with tf.device("/APU:0"):  # Users can also use 'with tf.device' syntax.
    c = tf.nn.relu(a)  # Runs on APU.

with tf.device("/CPU:0"):
    c = tf.nn.relu(a)  # Runs on CPU.

@tf.function  # Defining a tf.function
def run():
    d = tf.random.uniform(shape=[100], dtype=tf.float32)  # Runs on CPU.
    e = tf.nn.relu(d)  # Runs on APU.

run()  # PluggableDevices also work with tf.function and graph mode.
Available devices
Metal PluggableDevice for macOS GPUs:
- Works with TF 2.5 or later.
- Getting started guide: https://developer.apple.com/metal/tensorflow-plugin/
- For questions and feedback, visit the Apple Developer Forum: https://developer.apple.com/forums/tags/tensorflow-metal

DirectML PluggableDevice for Windows and WSL (preview):
- Works with the tensorflow-cpu package, version 2.10 or later.
- PyPI wheel: https://pypi.org/project/tensorflow-directml-plugin/
- GitHub repo: https://github.com/microsoft/tensorflow-directml-plugin
- For questions, feedback, or to raise issues, visit the Issues page of tensorflow-directml-plugin on GitHub: https://github.com/microsoft/tensorflow-directml-plugin/issues

Intel® Extension for TensorFlow PluggableDevice for Linux and WSL:
- Works with TF 2.10 or later.
- Getting started guide: https://intel.github.io/intel-extension-for-tensorflow/latest/get_started.html
- PyPI wheel: https://pypi.org/project/intel-extension-for-tensorflow/
- GitHub repo: https://github.com/intel/intel-extension-for-tensorflow
- For questions, feedback, or to raise issues, visit the Issues page of intel-extension-for-tensorflow on GitHub: https://github.com/intel/intel-extension-for-tensorflow/issues
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2024-07-25 UTC.