Module: tf.keras.applications.mobilenet
MobileNet v1 models for Keras.
MobileNet is a general architecture and can be used for multiple use cases.
Depending on the use case, it can use different input layer sizes and
different width factors. This allows different-width models to reduce
the number of multiply-adds and thereby reduce inference cost on mobile devices.
MobileNets support any input size greater than 32 x 32, with larger image sizes
offering better performance. The number of parameters and number of
multiply-adds can be modified by using the `alpha` parameter, which
increases/decreases the number of filters in each layer. By altering the image
size and `alpha` parameter, all 16 models from the paper can be built, with
ImageNet weights provided.
The paper demonstrates the performance of MobileNets using `alpha` values of
1.0 (also called 100% MobileNet), 0.75, 0.5 and 0.25. For each of these
`alpha` values, weights for 4 different input image sizes are provided
(224, 192, 160, 128).
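As a sketch of how these variants are selected in practice, the snippet below builds one of the 16 paper configurations (alpha 0.25 at 128 x 128) via the `alpha` and `input_shape` arguments. It passes `weights=None` to skip the pretrained-weight download; pass `weights="imagenet"` to load the matching checkpoint instead.

```python
import tensorflow as tf

# Build a 0.25-width MobileNet for 128x128 RGB inputs, one of the
# 16 variants described in the paper. weights=None creates a
# randomly initialized model; weights="imagenet" would download the
# pretrained weights for this exact (alpha, resolution) pair.
model = tf.keras.applications.mobilenet.MobileNet(
    input_shape=(128, 128, 3),
    alpha=0.25,
    weights=None,
)

# The narrow model is far smaller than the 4.2M-parameter
# full-width (alpha=1.0) MobileNet.
print(model.count_params())
```

Larger `alpha` values and resolutions trade more multiply-adds for higher accuracy, as the tables below quantify.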
The following table describes the size and accuracy of the 100% MobileNet
on size 224 x 224:

| Width Multiplier (alpha) | ImageNet Acc | Multiply-Adds (M) | Params (M) |
|--------------------------|--------------|-------------------|------------|
| 1.0 MobileNet-224        | 70.6 %       | 569               | 4.2        |
| 0.75 MobileNet-224       | 68.4 %       | 325               | 2.6        |
| 0.50 MobileNet-224       | 63.7 %       | 149               | 1.3        |
| 0.25 MobileNet-224       | 50.6 %       | 41                | 0.5        |
The following table describes the performance of the 100% MobileNet
on various input sizes:

| Resolution        | ImageNet Acc | Multiply-Adds (M) | Params (M) |
|-------------------|--------------|-------------------|------------|
| 1.0 MobileNet-224 | 70.6 %       | 569               | 4.2        |
| 1.0 MobileNet-192 | 69.1 %       | 418               | 4.2        |
| 1.0 MobileNet-160 | 67.2 %       | 290               | 4.2        |
| 1.0 MobileNet-128 | 64.4 %       | 186               | 4.2        |
Reference:

- [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861)
Functions

[`MobileNet(...)`](../../../tf/keras/applications/mobilenet/MobileNet): Instantiates the MobileNet architecture.

[`decode_predictions(...)`](../../../tf/keras/applications/mobilenet/decode_predictions): Decodes the prediction of an ImageNet model.

[`preprocess_input(...)`](../../../tf/keras/applications/mobilenet/preprocess_input): Preprocesses a tensor or Numpy array encoding a batch of images.
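As a minimal sketch of the preprocessing step, the snippet below applies `preprocess_input` to a random batch; MobileNet's variant scales pixel values from [0, 255] into [-1, 1], the range the pretrained weights expect. The random batch here is only an illustration; real use would load images and then pass the model's predictions to `decode_predictions` for human-readable ImageNet labels.

```python
import numpy as np
from tensorflow.keras.applications.mobilenet import preprocess_input

# A dummy batch of two 224x224 RGB "images" with pixel values in
# [0, 255], standing in for real image data.
batch = np.random.uniform(0, 255, size=(2, 224, 224, 3)).astype("float32")

# MobileNet's preprocess_input rescales to [-1, 1].
x = preprocess_input(batch)
print(x.min(), x.max())  # both values lie within [-1, 1]
```

Feeding unscaled [0, 255] pixels to a model loaded with `weights="imagenet"` is a common source of silently poor predictions, which is why the docs pair `preprocess_input` with `decode_predictions`.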
Last updated 2022-10-28 UTC.