# open_images_v4

- **Description**:
Open Images is a dataset of ~9M images that have been annotated with image-level
labels and object bounding boxes.
The training set of V4 contains 14.6M bounding boxes for 600 object classes on
1.74M images, making it the largest existing dataset with object location
annotations. The boxes have been largely manually drawn by professional
annotators to ensure accuracy and consistency. The images are very diverse and
often contain complex scenes with several objects (8.4 per image on average).
Moreover, the dataset is annotated with image-level labels spanning thousands of
classes.
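A quick arithmetic check of the figures quoted above: ~14.6M training boxes spread over ~1.74M training images does give the stated average of 8.4 boxes per image.

```python
# Sanity-check the quoted per-image average: 14.6M boxes / 1.74M images.
boxes = 14_600_000
images = 1_743_042  # size of the 'train' split
print(round(boxes / images, 1))  # 8.4
```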
- **Splits**:

| Split          | Examples  |
|----------------|-----------|
| `'test'`       | 125,436   |
| `'train'`      | 1,743,042 |
| `'validation'` | 41,620    |
- **Feature structure**:

    FeaturesDict({
        'bobjects': Sequence({
            'bbox': BBoxFeature(shape=(4,), dtype=float32),
            'is_depiction': int8,
            'is_group_of': int8,
            'is_inside': int8,
            'is_occluded': int8,
            'is_truncated': int8,
            'label': ClassLabel(shape=(), dtype=int64, num_classes=601),
            'source': ClassLabel(shape=(), dtype=int64, num_classes=6),
        }),
        'image': Image(shape=(None, None, 3), dtype=uint8),
        'image/filename': Text(shape=(), dtype=string),
        'objects': Sequence({
            'confidence': int32,
            'label': ClassLabel(shape=(), dtype=int64, num_classes=19995),
            'source': ClassLabel(shape=(), dtype=int64, num_classes=6),
        }),
        'objects_trainable': Sequence({
            'confidence': int32,
            'label': ClassLabel(shape=(), dtype=int64, num_classes=7186),
            'source': ClassLabel(shape=(), dtype=int64, num_classes=6),
        }),
    })
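As a minimal sketch (not taken from the dataset docs), the normalized `bbox` values above can be mapped back to pixel coordinates: TFDS's `BBoxFeature` stores each box as `[ymin, xmin, ymax, xmax]`, normalized to `[0, 1]` relative to the image height and width.

```python
def bbox_to_pixels(bbox, height, width):
    """Convert a normalized [ymin, xmin, ymax, xmax] box to pixel coords."""
    ymin, xmin, ymax, xmax = bbox
    return (ymin * height, xmin * width, ymax * height, xmax * width)

# A box covering the central quarter of a 400x600 image:
print(bbox_to_pixels([0.25, 0.25, 0.75, 0.75], 400, 600))
# (100.0, 150.0, 300.0, 450.0)
```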
- **Feature documentation**:

| Feature                      | Class        | Shape           | Dtype   | Description |
|------------------------------|--------------|-----------------|---------|-------------|
|                              | FeaturesDict |                 |         |             |
| bobjects                     | Sequence     |                 |         |             |
| bobjects/bbox                | BBoxFeature  | (4,)            | float32 |             |
| bobjects/is_depiction        | Tensor       |                 | int8    |             |
| bobjects/is_group_of         | Tensor       |                 | int8    |             |
| bobjects/is_inside           | Tensor       |                 | int8    |             |
| bobjects/is_occluded         | Tensor       |                 | int8    |             |
| bobjects/is_truncated        | Tensor       |                 | int8    |             |
| bobjects/label               | ClassLabel   |                 | int64   |             |
| bobjects/source              | ClassLabel   |                 | int64   |             |
| image                        | Image        | (None, None, 3) | uint8   |             |
| image/filename               | Text         |                 | string  |             |
| objects                      | Sequence     |                 |         |             |
| objects/confidence           | Tensor       |                 | int32   |             |
| objects/label                | ClassLabel   |                 | int64   |             |
| objects/source               | ClassLabel   |                 | int64   |             |
| objects_trainable            | Sequence     |                 |         |             |
| objects_trainable/confidence | Tensor       |                 | int32   |             |
| objects_trainable/label      | ClassLabel   |                 | int64   |             |
| objects_trainable/source     | ClassLabel   |                 | int64   |             |
- **Citation**:

    @article{OpenImages,
      author  = {Alina Kuznetsova and
                 Hassan Rom and
                 Neil Alldrin and
                 Jasper Uijlings and
                 Ivan Krasin and
                 Jordi Pont-Tuset and
                 Shahab Kamali and
                 Stefan Popov and
                 Matteo Malloci and
                 Tom Duerig and
                 Vittorio Ferrari},
      title   = {The Open Images Dataset V4: Unified image classification,
                 object detection, and visual relationship detection at scale},
      year    = {2018},
      journal = {arXiv:1811.00982}
    }
    @article{OpenImages2,
      author  = {Krasin, Ivan and
                 Duerig, Tom and
                 Alldrin, Neil and
                 Ferrari, Vittorio and
                 Abu-El-Haija, Sami and
                 Kuznetsova, Alina and
                 Rom, Hassan and
                 Uijlings, Jasper and
                 Popov, Stefan and
                 Kamali, Shahab and
                 Malloci, Matteo and
                 Pont-Tuset, Jordi and
                 Veit, Andreas and
                 Belongie, Serge and
                 Gomes, Victor and
                 Gupta, Abhinav and
                 Sun, Chen and
                 Chechik, Gal and
                 Cai, David and
                 Feng, Zheyun and
                 Narayanan, Dhyanesh and
                 Murphy, Kevin},
      title   = {OpenImages: A public dataset for large-scale multi-label and
                 multi-class image classification.},
      journal = {Dataset available from
                 https://storage.googleapis.com/openimages/web/index.html},
      year    = {2017}
    }
open_images_v4/original (default config)
----------------------------------------

- **Config description**: Images at their original resolution and quality.

- **Dataset size**: `562.42 GiB`

open_images_v4/300k
-------------------

- **Config description**: Images have roughly 300,000 pixels, at 72 JPEG
  quality.

- **Dataset size**: `81.92 GiB`

open_images_v4/200k
-------------------

- **Config description**: Images have roughly 200,000 pixels, at 72 JPEG
  quality.

- **Dataset size**: `60.70 GiB`

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2024-06-01 UTC.