TrishEdith parquet-converter committed on
Commit fa78e53 · verified · 0 Parent(s):

Duplicate from fever/fever

Co-authored-by: Parquet-converter (BOT) <[email protected]>

Files changed (3)
  1. .gitattributes +27 -0
  2. README.md +353 -0
  3. fever.py +218 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,353 @@
+ ---
+ language:
+ - en
+ paperswithcode_id: fever
+ annotations_creators:
+ - crowdsourced
+ language_creators:
+ - found
+ license:
+ - cc-by-sa-3.0
+ - gpl-3.0
+ multilinguality:
+ - monolingual
+ pretty_name: FEVER
+ size_categories:
+ - 100K<n<1M
+ source_datasets:
+ - extended|wikipedia
+ task_categories:
+ - text-classification
+ task_ids: []
+ tags:
+ - knowledge-verification
+ dataset_info:
+ - config_name: v1.0
+   features:
+   - name: id
+     dtype: int32
+   - name: label
+     dtype: string
+   - name: claim
+     dtype: string
+   - name: evidence_annotation_id
+     dtype: int32
+   - name: evidence_id
+     dtype: int32
+   - name: evidence_wiki_url
+     dtype: string
+   - name: evidence_sentence_id
+     dtype: int32
+   splits:
+   - name: train
+     num_bytes: 29591412
+     num_examples: 311431
+   - name: labelled_dev
+     num_bytes: 3643157
+     num_examples: 37566
+   - name: unlabelled_dev
+     num_bytes: 1548965
+     num_examples: 19998
+   - name: unlabelled_test
+     num_bytes: 1617002
+     num_examples: 19998
+   - name: paper_dev
+     num_bytes: 1821489
+     num_examples: 18999
+   - name: paper_test
+     num_bytes: 1821668
+     num_examples: 18567
+   download_size: 44853972
+   dataset_size: 40043693
+ - config_name: v2.0
+   features:
+   - name: id
+     dtype: int32
+   - name: label
+     dtype: string
+   - name: claim
+     dtype: string
+   - name: evidence_annotation_id
+     dtype: int32
+   - name: evidence_id
+     dtype: int32
+   - name: evidence_wiki_url
+     dtype: string
+   - name: evidence_sentence_id
+     dtype: int32
+   splits:
+   - name: validation
+     num_bytes: 306243
+     num_examples: 2384
+   download_size: 392466
+   dataset_size: 306243
+ - config_name: wiki_pages
+   features:
+   - name: id
+     dtype: string
+   - name: text
+     dtype: string
+   - name: lines
+     dtype: string
+   splits:
+   - name: wikipedia_pages
+     num_bytes: 7254115038
+     num_examples: 5416537
+   download_size: 1713485474
+   dataset_size: 7254115038
+ ---
+
+ # Dataset Card for "fever"
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** [https://fever.ai/](https://fever.ai/)
+ - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Dataset Summary
+
+ With billions of individual pages on the web providing information on almost every conceivable topic, we should have
+ the ability to collect facts that answer almost every conceivable question. However, only a small fraction of this
+ information is contained in structured sources (Wikidata, Freebase, etc.) – we are therefore limited by our ability to
+ transform free-form text to structured knowledge. There is, however, another problem that has become the focus of a lot
+ of recent research and media coverage: false information coming from unreliable sources.
+
+ The FEVER workshops are a venue for work on verifiable knowledge extraction and aim to stimulate progress in this direction.
+
+ - FEVER Dataset: FEVER (Fact Extraction and VERification) consists of 185,445 claims generated by altering sentences
+ extracted from Wikipedia and subsequently verified without knowledge of the sentence they were derived from. The claims
+ are classified as Supported, Refuted or NotEnoughInfo. For the first two classes, the annotators also recorded the
+ sentence(s) forming the necessary evidence for their judgment.
+
+ - FEVER 2.0 Adversarial Attacks Dataset: The FEVER 2.0 Dataset consists of 1174 claims created by the submissions of
+ participants in the Breaker phase of the 2019 shared task. Participants (Breakers) were tasked with generating
+ adversarial examples that induce classification errors for the existing systems. Breakers submitted a dataset of up to
+ 1000 instances with an equal number of instances for each of the three classes (Supported, Refuted, NotEnoughInfo). Only
+ novel claims (i.e. not contained in the original FEVER dataset) were considered valid entries to the shared task.
+ The submissions were then manually evaluated for Correctness (grammatical, appropriately labeled, and meeting the FEVER
+ annotation guidelines requirements).
+
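+ The three configurations described on this card can be pulled with the `datasets` library. A minimal sketch (not part of the original card; it assumes the dataset id `fever` on the Hub, and newer `datasets` releases may additionally require `trust_remote_code=True` for this script-based dataset):
+
+ ```python
+ from datasets import load_dataset
+
+ # Claim verification data (six splits, see "Data Splits" below)
+ fever_v1 = load_dataset("fever", "v1.0")
+
+ # Adversarial claims from the FEVER 2.0 shared task
+ fever_v2 = load_dataset("fever", "v2.0")
+
+ # Processed Wikipedia dump used as the evidence source (large download)
+ wiki = load_dataset("fever", "wiki_pages")
+
+ print(fever_v1["train"][0])
+ ```
+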
+ ### Supported Tasks and Leaderboards
+
+ The task is verification of textual claims against textual sources.
+
+ When compared to textual entailment (TE)/natural language inference, the key difference is that in these tasks the
+ passage needed to verify each claim is given, and in recent years it typically consists of a single sentence, while in
+ verification systems it is retrieved from a large set of documents in order to form the evidence.
+
+ ### Languages
+
+ The dataset is in English.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ #### v1.0
+
+ - **Size of downloaded dataset files:** 44.86 MB
+ - **Size of the generated dataset:** 40.05 MB
+ - **Total amount of disk used:** 84.89 MB
+
+ An example of 'train' looks as follows.
+ ```
+ {'claim': 'Nikolaj Coster-Waldau worked with the Fox Broadcasting Company.',
+  'evidence_wiki_url': 'Nikolaj_Coster-Waldau',
+  'label': 'SUPPORTS',
+  'id': 75397,
+  'evidence_id': 104971,
+  'evidence_sentence_id': 7,
+  'evidence_annotation_id': 92206}
+ ```
+
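+ Note that v1.0 is flattened: a claim with several annotated evidence sentences appears once per evidence record, with `id`, `label` and `claim` repeated in each row. A minimal sketch for regrouping the evidence per claim (illustrative only; the variable names are not part of the dataset):
+
+ ```python
+ from collections import defaultdict
+
+ from datasets import load_dataset
+
+ train = load_dataset("fever", "v1.0", split="train")
+
+ # Collect the (wiki page, sentence index) evidence pairs for every claim id
+ evidence_by_claim = defaultdict(list)
+ for row in train:
+     evidence_by_claim[row["id"]].append(
+         (row["evidence_wiki_url"], row["evidence_sentence_id"])
+     )
+ ```
+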
+ #### v2.0
+
+ - **Size of downloaded dataset files:** 0.39 MB
+ - **Size of the generated dataset:** 0.30 MB
+ - **Total amount of disk used:** 0.70 MB
+
+ An example of 'validation' looks as follows.
+ ```
+ {'claim': "There is a convicted statutory rapist called Chinatown's writer.",
+  'evidence_wiki_url': '',
+  'label': 'NOT ENOUGH INFO',
+  'id': 500000,
+  'evidence_id': -1,
+  'evidence_sentence_id': -1,
+  'evidence_annotation_id': 269158}
+ ```
+
+ #### wiki_pages
+
+ - **Size of downloaded dataset files:** 1.71 GB
+ - **Size of the generated dataset:** 7.25 GB
+ - **Total amount of disk used:** 8.97 GB
+
+ An example of 'wikipedia_pages' looks as follows.
+ ```
+ {'text': 'The following are the football -LRB- soccer -RRB- events of the year 1928 throughout the world . ',
+  'lines': '0\tThe following are the football -LRB- soccer -RRB- events of the year 1928 throughout the world .\n1\t',
+  'id': '1928_in_association_football'}
+ ```
+
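+ The `lines` field packs the sentence-split page text into a single string: sentences are separated by `\n` and each is prefixed with its sentence index and a tab, and that index is what `evidence_sentence_id` in the v1.0 configuration refers to. An illustrative helper for unpacking it:
+
+ ```python
+ def parse_lines(lines: str):
+     """Map sentence index -> sentence text for a wiki_pages 'lines' field."""
+     sentences = {}
+     for line in lines.split("\n"):
+         if not line:
+             continue
+         idx, _, text = line.partition("\t")
+         if idx.isdigit():
+             sentences[int(idx)] = text
+     return sentences
+ ```
+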
+ ### Data Fields
+
+ The data fields are the same among all splits.
+
+ #### v1.0
+
+ - `id`: an `int32` feature.
+ - `label`: a `string` feature.
+ - `claim`: a `string` feature.
+ - `evidence_annotation_id`: an `int32` feature.
+ - `evidence_id`: an `int32` feature.
+ - `evidence_wiki_url`: a `string` feature.
+ - `evidence_sentence_id`: an `int32` feature.
+
+ #### v2.0
+
+ - `id`: an `int32` feature.
+ - `label`: a `string` feature.
+ - `claim`: a `string` feature.
+ - `evidence_annotation_id`: an `int32` feature.
+ - `evidence_id`: an `int32` feature.
+ - `evidence_wiki_url`: a `string` feature.
+ - `evidence_sentence_id`: an `int32` feature.
+
+ #### wiki_pages
+
+ - `id`: a `string` feature.
+ - `text`: a `string` feature.
+ - `lines`: a `string` feature.
+
+ ### Data Splits
+
+ #### v1.0
+
+ |      |  train | unlabelled_dev | labelled_dev | paper_dev | unlabelled_test | paper_test |
+ |------|-------:|---------------:|-------------:|----------:|----------------:|-----------:|
+ | v1.0 | 311431 |          19998 |        37566 |     18999 |           19998 |      18567 |
+
+ #### v2.0
+
+ |      | validation |
+ |------|-----------:|
+ | v2.0 |       2384 |
+
+ #### wiki_pages
+
+ |            | wikipedia_pages |
+ |------------|----------------:|
+ | wiki_pages |         5416537 |
+
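+ A single split can also be requested directly; a short, purely illustrative sketch:
+
+ ```python
+ from collections import Counter
+
+ from datasets import load_dataset
+
+ # Load only the labelled development set of FEVER v1.0
+ dev = load_dataset("fever", "v1.0", split="labelled_dev")
+
+ # Label distribution over the evidence-level rows of this split
+ print(Counter(dev["label"]))
+ ```
+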
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ #### Who are the source language producers?
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ #### Who are the annotators?
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Discussion of Biases
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Other Known Limitations
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Licensing Information
+
+ FEVER license:
+
+ ```
+ These data annotations incorporate material from Wikipedia, which is licensed pursuant to the Wikipedia Copyright Policy. These annotations are made available under the license terms described on the applicable Wikipedia article pages, or, where Wikipedia license terms are unavailable, under the Creative Commons Attribution-ShareAlike License (version 3.0), available at http://creativecommons.org/licenses/by-sa/3.0/ (collectively, the “License Terms”). You may not use these files except in compliance with the applicable License Terms.
+ ```
+
+ ### Citation Information
+
+ If you use "FEVER Dataset", please cite:
+ ```bibtex
+ @inproceedings{Thorne18Fever,
+     author = {Thorne, James and Vlachos, Andreas and Christodoulopoulos, Christos and Mittal, Arpit},
+     title = {{FEVER}: a Large-scale Dataset for Fact Extraction and {VERification}},
+     booktitle = {NAACL-HLT},
+     year = {2018}
+ }
+ ```
+
+ If you use "FEVER 2.0 Adversarial Attacks Dataset", please cite:
+ ```bibtex
+ @inproceedings{Thorne19FEVER2,
+     author = {Thorne, James and Vlachos, Andreas and Cocarascu, Oana and Christodoulopoulos, Christos and Mittal, Arpit},
+     title = {The {FEVER2.0} Shared Task},
+     booktitle = {Proceedings of the Second Workshop on {Fact Extraction and VERification (FEVER)}},
+     year = {2019}
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq),
+ [@mariamabarham](https://github.com/mariamabarham), [@lewtun](https://github.com/lewtun),
+ [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
fever.py ADDED
@@ -0,0 +1,218 @@
+ # coding=utf-8
+ # Copyright 2020 The TensorFlow Datasets Authors and the HuggingFace Datasets Authors.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # Lint as: python3
+ """FEVER dataset."""
+
+ import json
+ import os
+ import textwrap
+
+ import datasets
+
+
+ class FeverConfig(datasets.BuilderConfig):
+     """BuilderConfig for FEVER."""
+
+     def __init__(self, homepage: str = None, citation: str = None, base_url: str = None, urls: dict = None, **kwargs):
+         """BuilderConfig for FEVER.
+
+         Args:
+             homepage (`str`): Homepage.
+             citation (`str`): Citation reference.
+             base_url (`str`): Data base URL that precedes all data URLs.
+             urls (`dict`): Data URLs (each URL will be preceded by `base_url`).
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super().__init__(**kwargs)
+         self.homepage = homepage
+         self.citation = citation
+         self.base_url = base_url
+         self.urls = {key: f"{base_url}/{url}" for key, url in urls.items()}
+
+
+ class Fever(datasets.GeneratorBasedBuilder):
+     """Fact Extraction and VERification Dataset."""
+
+     BUILDER_CONFIGS = [
+         FeverConfig(
+             name="v1.0",
+             version=datasets.Version("1.0.0"),
+             description=textwrap.dedent(
+                 "FEVER v1.0\n"
+                 "FEVER (Fact Extraction and VERification) consists of 185,445 claims generated by altering sentences "
+                 "extracted from Wikipedia and subsequently verified without knowledge of the sentence they were "
+                 "derived from. The claims are classified as Supported, Refuted or NotEnoughInfo. For the first two "
+                 "classes, the annotators also recorded the sentence(s) forming the necessary evidence for their "
+                 "judgment."
+             ),
+             homepage="https://fever.ai/dataset/fever.html",
+             citation=textwrap.dedent(
+                 """\
+                 @inproceedings{Thorne18Fever,
+                     author = {Thorne, James and Vlachos, Andreas and Christodoulopoulos, Christos and Mittal, Arpit},
+                     title = {{FEVER}: a Large-scale Dataset for Fact Extraction and {VERification}},
+                     booktitle = {NAACL-HLT},
+                     year = {2018}
+                 }"""
+             ),
+             base_url="https://fever.ai/download/fever",
+             urls={
+                 datasets.Split.TRAIN: "train.jsonl",
+                 "labelled_dev": "shared_task_dev.jsonl",
+                 "unlabelled_dev": "shared_task_dev_public.jsonl",
+                 "unlabelled_test": "shared_task_test.jsonl",
+                 "paper_dev": "paper_dev.jsonl",
+                 "paper_test": "paper_test.jsonl",
+             },
+         ),
+         FeverConfig(
+             name="v2.0",
+             version=datasets.Version("2.0.0"),
+             description=textwrap.dedent(
+                 "FEVER v2.0:\n"
+                 "The FEVER 2.0 Dataset consists of 1174 claims created by the submissions of participants in the "
+                 "Breaker phase of the 2019 shared task. Participants (Breakers) were tasked with generating "
+                 "adversarial examples that induce classification errors for the existing systems. Breakers submitted "
+                 "a dataset of up to 1000 instances with equal number of instances for each of the three classes "
+                 "(Supported, Refuted NotEnoughInfo). Only novel claims (i.e. not contained in the original FEVER "
+                 "dataset) were considered as valid entries to the shared task. The submissions were then manually "
+                 "evaluated for Correctness (grammatical, appropriately labeled and meet the FEVER annotation "
+                 "guidelines requirements)."
+             ),
+             homepage="https://fever.ai/dataset/adversarial.html",
+             citation=textwrap.dedent(
+                 """\
+                 @inproceedings{Thorne19FEVER2,
+                     author = {Thorne, James and Vlachos, Andreas and Cocarascu, Oana and Christodoulopoulos, Christos and Mittal, Arpit},
+                     title = {The {FEVER2.0} Shared Task},
+                     booktitle = {Proceedings of the Second Workshop on {Fact Extraction and VERification (FEVER)}},
+                     year = {2019}
+                 }"""
+             ),
+             base_url="https://fever.ai/download/fever2.0",
+             urls={
+                 datasets.Split.VALIDATION: "fever2-fixers-dev.jsonl",
+             },
+         ),
+         FeverConfig(
+             name="wiki_pages",
+             version=datasets.Version("1.0.0"),
+             description=textwrap.dedent(
+                 "Wikipedia pages for FEVER v1.0:\n"
+                 "FEVER (Fact Extraction and VERification) consists of 185,445 claims generated by altering sentences "
+                 "extracted from Wikipedia and subsequently verified without knowledge of the sentence they were "
+                 "derived from. The claims are classified as Supported, Refuted or NotEnoughInfo. For the first two "
+                 "classes, the annotators also recorded the sentence(s) forming the necessary evidence for their "
+                 "judgment."
+             ),
+             homepage="https://fever.ai/dataset/fever.html",
+             citation=textwrap.dedent(
+                 """\
+                 @inproceedings{Thorne18Fever,
+                     author = {Thorne, James and Vlachos, Andreas and Christodoulopoulos, Christos and Mittal, Arpit},
+                     title = {{FEVER}: a Large-scale Dataset for Fact Extraction and {VERification}},
+                     booktitle = {NAACL-HLT},
+                     year = {2018}
+                 }"""
+             ),
+             base_url="https://fever.ai/download/fever",
+             urls={
+                 "wikipedia_pages": "wiki-pages.zip",
+             },
+         ),
+     ]
+
+     def _info(self):
+         if self.config.name == "wiki_pages":
+             features = {
+                 "id": datasets.Value("string"),
+                 "text": datasets.Value("string"),
+                 "lines": datasets.Value("string"),
+             }
+         elif self.config.name == "v1.0" or self.config.name == "v2.0":
+             features = {
+                 "id": datasets.Value("int32"),
+                 "label": datasets.Value("string"),
+                 "claim": datasets.Value("string"),
+                 "evidence_annotation_id": datasets.Value("int32"),
+                 "evidence_id": datasets.Value("int32"),
+                 "evidence_wiki_url": datasets.Value("string"),
+                 "evidence_sentence_id": datasets.Value("int32"),
+             }
+         return datasets.DatasetInfo(
+             description=self.config.description,
+             features=datasets.Features(features),
+             homepage=self.config.homepage,
+             citation=self.config.citation,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         dl_paths = dl_manager.download_and_extract(self.config.urls)
+         return [
+             datasets.SplitGenerator(
+                 name=split,
+                 gen_kwargs={
+                     "filepath": dl_paths[split]
+                     if self.config.name != "wiki_pages"
+                     else dl_manager.iter_files(os.path.join(dl_paths[split], "wiki-pages")),
+                 },
+             )
+             for split in dl_paths.keys()
+         ]
+
+     def _generate_examples(self, filepath):
+         """Yields examples."""
+         if self.config.name == "v1.0" or self.config.name == "v2.0":
+             with open(filepath, encoding="utf-8") as f:
+                 for row_id, row in enumerate(f):
+                     data = json.loads(row)
+                     id_ = data["id"]
+                     label = data.get("label", "")
+                     claim = data["claim"]
+                     evidences = data.get("evidence", [])
+                     if len(evidences) > 0:
+                         # Flatten the nested evidence lists: one generated example per evidence sentence.
+                         for i in range(len(evidences)):
+                             for j in range(len(evidences[i])):
+                                 annot_id = evidences[i][j][0] if evidences[i][j][0] else -1
+                                 evidence_id = evidences[i][j][1] if evidences[i][j][1] else -1
+                                 wiki_url = evidences[i][j][2] if evidences[i][j][2] else ""
+                                 sent_id = evidences[i][j][3] if evidences[i][j][3] else -1
+                                 yield str(row_id) + "_" + str(i) + "_" + str(j), {
+                                     "id": id_,
+                                     "label": label,
+                                     "claim": claim,
+                                     "evidence_annotation_id": annot_id,
+                                     "evidence_id": evidence_id,
+                                     "evidence_wiki_url": wiki_url,
+                                     "evidence_sentence_id": sent_id,
+                                 }
+                     else:
+                         # Claims without any evidence entries get placeholder values.
+                         yield row_id, {
+                             "id": id_,
+                             "label": label,
+                             "claim": claim,
+                             "evidence_annotation_id": -1,
+                             "evidence_id": -1,
+                             "evidence_wiki_url": "",
+                             "evidence_sentence_id": -1,
+                         }
+         elif self.config.name == "wiki_pages":
+             for file_id, file in enumerate(filepath):
+                 with open(file, encoding="utf-8") as f:
+                     for row_id, row in enumerate(f):
+                         data = json.loads(row)
+                         yield f"{file_id}_{row_id}", data
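
A brief usage sketch for the loading script above (not part of the commit; it assumes a `datasets` version that still supports script-based loading, and the local file path is illustrative):

```python
from datasets import load_dataset

# Point load_dataset at a local copy of the loading script and pick a config
fever = load_dataset("./fever.py", "v1.0")
print(fever["labelled_dev"][0])
```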