tff.simulation.baselines.emnist.create_autoencoder_task
Creates a baseline task for autoencoding on EMNIST.
tff.simulation.baselines.emnist.create_autoencoder_task(
    train_client_spec: tff.simulation.baselines.ClientSpec,
    eval_client_spec: Optional[tff.simulation.baselines.ClientSpec] = None,
    only_digits: bool = False,
    cache_dir: Optional[str] = None,
    use_synthetic_data: bool = False
) -> tff.simulation.baselines.BaselineTask
This task involves performing autoencoding on the EMNIST dataset using a
densely connected bottleneck network. The model uses 8 layers of widths
[1000, 500, 250, 30, 250, 500, 1000, 784], with the final layer being the
output layer. Each layer uses a sigmoid activation function, except the
smallest layer, which uses a linear activation function.
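A minimal Keras sketch of this architecture, reconstructed from the description above (the optimizer and other construction details are assumptions, not taken from the library's internals):

```python
import tensorflow as tf

# Illustrative reconstruction of the bottleneck autoencoder described
# above; the actual model builder inside TFF may differ in details.
# Layer widths and activations follow the text: sigmoid everywhere
# except the 30-unit bottleneck, which is linear.
layer_specs = [
    (1000, "sigmoid"),
    (500, "sigmoid"),
    (250, "sigmoid"),
    (30, "linear"),    # bottleneck layer
    (250, "sigmoid"),
    (500, "sigmoid"),
    (1000, "sigmoid"),
    (784, "sigmoid"),  # output layer reconstructs the 784-pixel input
]

inputs = tf.keras.Input(shape=(784,))
x = inputs
for width, activation in layer_specs:
    x = tf.keras.layers.Dense(width, activation=activation)(x)
model = tf.keras.Model(inputs=inputs, outputs=x)

# The task minimizes mean squared error between input and output.
model.compile(optimizer="sgd", loss="mse")
```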
The goal of the task is to minimize the mean squared error between the input
to the network and the output of the network.
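As a concrete illustration of this objective, here is the reconstruction error computed in plain NumPy (independent of the library; the input values are made up for the example):

```python
import numpy as np

def reconstruction_mse(x, x_hat):
    """Mean squared error between a network input and its reconstruction."""
    return float(np.mean((x - x_hat) ** 2))

x = np.array([0.0, 0.5, 1.0])      # flattened pixel intensities in [0, 1]
x_hat = np.array([0.1, 0.5, 0.9])  # the autoencoder's reconstruction
error = reconstruction_mse(x, x_hat)  # mean of [0.01, 0.0, 0.01]
```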
Args:
  train_client_spec: A tff.simulation.baselines.ClientSpec specifying how to
    preprocess train client data.
  eval_client_spec: An optional tff.simulation.baselines.ClientSpec
    specifying how to preprocess evaluation client data. If set to None, the
    evaluation datasets will use a batch size of 64 with no extra
    preprocessing.
  only_digits: A boolean indicating whether to use the smaller EMNIST-10
    dataset with only 10 numeric classes (True) or the full EMNIST-62 dataset
    containing 62 alphanumeric classes (False).
  cache_dir: An optional directory to cache the downloaded datasets. If
    None, they will be cached to ~/.tff/.
  use_synthetic_data: A boolean indicating whether to use synthetic EMNIST
    data. This option should only be used for testing purposes, in order to
    avoid downloading the entire EMNIST dataset.

Returns:
  A tff.simulation.baselines.BaselineTask.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2024-09-20 UTC.