flic
From the paper: We collected a 5,003-image dataset automatically from popular
Hollywood movies. The images were obtained by running a state-of-the-art person
detector on every tenth frame of 30 movies. People detected with high confidence
(roughly 20K candidates) were then sent to the crowdsourcing marketplace Amazon
Mechanical Turk to obtain ground truth labeling. Each image was annotated by five
Turkers for $0.01 each to label 10 upper-body joints. The median-of-five labeling
was taken for each image to be robust to outlier annotation. Finally, images were
rejected manually by us if the person was occluded or severely non-frontal. We
set aside 20% (1,016 images) of the data for testing.
Split   | Examples
--------|---------
'test'  | 1,016
'train' | 3,987
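To make the split names above concrete, here is a minimal loading sketch; it assumes tensorflow_datasets is installed and uses the default flic/small config.

import tensorflow_datasets as tfds

# Minimal sketch: load both FLIC splits by name ('flic' resolves to the
# default flic/small config).
(train_ds, test_ds), info = tfds.load(
    'flic', split=['train', 'test'], with_info=True)
print(info.splits['train'].num_examples)  # 3987
print(info.splits['test'].num_examples)   # 1016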
FeaturesDict({
'currframe': float64,
'image': Image(shape=(480, 720, 3), dtype=uint8),
'moviename': Text(shape=(), dtype=string),
'poselet_hit_idx': Sequence(uint16),
'torsobox': BBoxFeature(shape=(4,), dtype=float32),
'xcoords': Sequence(float64),
'ycoords': Sequence(float64),
})
Feature         | Class            | Shape         | Dtype   | Description
----------------|------------------|---------------|---------|------------
                | FeaturesDict     |               |         |
currframe       | Tensor           |               | float64 |
image           | Image            | (480, 720, 3) | uint8   |
moviename       | Text             |               | string  |
poselet_hit_idx | Sequence(Tensor) | (None,)       | uint16  |
torsobox        | BBoxFeature      | (4,)          | float32 |
xcoords         | Sequence(Tensor) | (None,)       | float64 |
ycoords         | Sequence(Tensor) | (None,)       | float64 |
@inproceedings{modec13,
title={MODEC: Multimodal Decomposable Models for Human Pose Estimation},
author={Sapp, Benjamin and Taskar, Ben},
booktitle={In Proc. CVPR},
year={2013},
}
flic/small (default config)
Config description: Uses the 5,003 examples from the CVPR 2013 MODEC paper.
Download size: 286.35 MiB
Figure (tfds.show_examples): (not shown)
flic/full
Config description: Uses 20,928 examples, a superset of FLIC that includes
more difficult examples.
Download size: 1.10 GiB
Figure (tfds.show_examples): (not shown)
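Either config can be selected by name when loading; a minimal sketch, assuming both configs have already been downloaded and prepared:

import tensorflow_datasets as tfds

# Sketch: pick a FLIC config explicitly by name.
small = tfds.load('flic/small', split='train')  # default: the 5,003-image CVPR'13 set
full = tfds.load('flic/full', split='train')    # 20,928-image superset with harder examples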