# natural_questions
- **Description**:

The NQ corpus contains questions from real users, and it requires QA systems to
read and comprehend an entire Wikipedia article that may or may not contain the
answer to the question. The inclusion of real user questions, and the
requirement that solutions read an entire page to find the answer, make NQ a
more realistic and challenging task than prior QA datasets.

- **Homepage**: https://ai.google.com/research/NaturalQuestions/dataset
- **Source code**: [`tfds.datasets.natural_questions.Builder`](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/datasets/natural_questions/natural_questions_dataset_builder.py)
- **Versions**: `0.0.2` (no release notes); `0.1.0` (default, no release notes)
- **Download size**: `41.97 GiB`
- **Auto-cached**: No
- **Splits**:

| Split          | Examples |
|----------------|----------|
| `'train'`      | 307,373  |
| `'validation'` | 7,830    |
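The split sizes above leave a comparatively small held-out set. A minimal sketch of working with them (the `tfds.load` call is shown only in a comment because the default config is a roughly 42 GiB download):

```python
# Sketch of loading the dataset with TensorFlow Datasets; commented out
# because the default config downloads ~42 GiB:
#
#   import tensorflow_datasets as tfds
#   ds = tfds.load('natural_questions', split='validation')

# The catalog's split sizes imply a small validation share:
TRAIN_EXAMPLES = 307_373
VALIDATION_EXAMPLES = 7_830
share = VALIDATION_EXAMPLES / (TRAIN_EXAMPLES + VALIDATION_EXAMPLES)
print(f"validation share: {share:.2%}")  # ~2.5% of all examples
```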
@article{47761,
title = {Natural Questions: a Benchmark for Question Answering Research},
author = {Tom Kwiatkowski and Jennimaria Palomaki and Olivia Redfield and Michael Collins and Ankur Parikh and Chris Alberti and Danielle Epstein and Illia Polosukhin and Matthew Kelcey and Jacob Devlin and Kenton Lee and Kristina N. Toutanova and Llion Jones and Ming-Wei Chang and Andrew Dai and Jakob Uszkoreit and Quoc Le and Slav Petrov},
year = {2019},
journal = {Transactions of the Association for Computational Linguistics}
}
natural_questions/default (default config)
------------------------------------------

- **Config description**: Default natural_questions config
- **Dataset size**: `90.26 GiB`
- **Feature structure**:
    FeaturesDict({
        'annotations': Sequence({
            'id': string,
            'long_answer': FeaturesDict({
                'end_byte': int64,
                'end_token': int64,
                'start_byte': int64,
                'start_token': int64,
            }),
            'short_answers': Sequence({
                'end_byte': int64,
                'end_token': int64,
                'start_byte': int64,
                'start_token': int64,
                'text': Text(shape=(), dtype=string),
            }),
            'yes_no_answer': ClassLabel(shape=(), dtype=int64, num_classes=2),
        }),
        'document': FeaturesDict({
            'html': Text(shape=(), dtype=string),
            'title': Text(shape=(), dtype=string),
            'tokens': Sequence({
                'is_html': bool,
                'token': Text(shape=(), dtype=string),
            }),
            'url': Text(shape=(), dtype=string),
        }),
        'id': string,
        'question': FeaturesDict({
            'text': Text(shape=(), dtype=string),
            'tokens': Sequence(string),
        }),
    })
- **Feature documentation**:

| Feature                               | Class            | Shape   | Dtype  | Description |
|---------------------------------------|------------------|---------|--------|-------------|
|                                       | FeaturesDict     |         |        |             |
| annotations                           | Sequence         |         |        |             |
| annotations/id                        | Tensor           |         | string |             |
| annotations/long_answer               | FeaturesDict     |         |        |             |
| annotations/long_answer/end_byte      | Tensor           |         | int64  |             |
| annotations/long_answer/end_token     | Tensor           |         | int64  |             |
| annotations/long_answer/start_byte    | Tensor           |         | int64  |             |
| annotations/long_answer/start_token   | Tensor           |         | int64  |             |
| annotations/short_answers             | Sequence         |         |        |             |
| annotations/short_answers/end_byte    | Tensor           |         | int64  |             |
| annotations/short_answers/end_token   | Tensor           |         | int64  |             |
| annotations/short_answers/start_byte  | Tensor           |         | int64  |             |
| annotations/short_answers/start_token | Tensor           |         | int64  |             |
| annotations/short_answers/text        | Text             |         | string |             |
| annotations/yes_no_answer             | ClassLabel       |         | int64  |             |
| document                              | FeaturesDict     |         |        |             |
| document/html                         | Text             |         | string |             |
| document/title                        | Text             |         | string |             |
| document/tokens                       | Sequence         |         |        |             |
| document/tokens/is_html               | Tensor           |         | bool   |             |
| document/tokens/token                 | Text             |         | string |             |
| document/url                          | Text             |         | string |             |
| document/url                          | Text             |         | string |             |
| id                                    | Tensor           |         | string |             |
| question                              | FeaturesDict     |         |        |             |
| question/text                         | Text             |         | string |             |
| question/tokens                       | Sequence(Tensor) | (None,) | string |             |
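The long- and short-answer annotations locate spans by byte offsets into `document/html`. A minimal sketch of slicing such a span; the document text and offsets below are made-up stand-ins, not real NQ data:

```python
def extract_span(html: bytes, start_byte: int, end_byte: int) -> str:
    """Decode the answer span addressed by byte offsets into the HTML."""
    return html[start_byte:end_byte].decode('utf-8')

# Toy document standing in for `document/html`; in real examples the
# offsets come from annotations/long_answer and annotations/short_answers.
doc = '<p>The capital of France is Paris.</p>'.encode('utf-8')
print(extract_span(doc, 28, 33))  # -> Paris
```

Note that in the raw NQ annotations an absent answer is conventionally marked with `-1` offsets, so it is worth checking for that sentinel before slicing.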
natural_questions/longt5
------------------------

- **Config description**: natural_questions preprocessed as in the longT5 benchmark
- **Dataset size**: `8.91 GiB`
- **Feature structure**:
FeaturesDict({
'all_answers': Sequence(Text(shape=(), dtype=string)),
'answer': Text(shape=(), dtype=string),
'context': Text(shape=(), dtype=string),
'id': Text(shape=(), dtype=string),
'question': Text(shape=(), dtype=string),
'title': Text(shape=(), dtype=string),
})
- **Feature documentation**:

| Feature     | Class          | Shape   | Dtype  | Description |
|-------------|----------------|---------|--------|-------------|
|             | FeaturesDict   |         |        |             |
| all_answers | Sequence(Text) | (None,) | string |             |
| answer      | Text           |         | string |             |
| context     | Text           |         | string |             |
| id          | Text           |         | string |             |
| question    | Text           |         | string |             |
| title       | Text           |         | string |             |
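Each `longt5` example is already flattened to plain text fields, so a seq2seq model input can be assembled with simple string formatting. The template below is a hypothetical illustration, not the exact formatting the longT5 benchmark uses:

```python
def to_seq2seq_input(example: dict) -> str:
    # Hypothetical "question: ... context: ..." template for illustration.
    return f"question: {example['question']} context: {example['context']}"

# Made-up example mirroring the longt5 feature structure.
example = {
    'question': 'when was the eiffel tower built',
    'context': 'The Eiffel Tower was constructed from 1887 to 1889.',
    'answer': '1887 to 1889',          # target text for the model
    'all_answers': ['1887 to 1889'],   # every annotated answer variant
}
print(to_seq2seq_input(example))
```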
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2022-12-14 UTC.