MehdiHosseiniMoghadam committed
Commit 04f871d · verified · 1 Parent(s): 5f584cf

Update README.md

Files changed (1)
  1. README.md +86 -46
README.md CHANGED
@@ -1,48 +1,88 @@
  ---
- dataset_info:
-   features:
-   - name: pre_text
-     sequence: string
-   - name: post_text
-     sequence: string
-   - name: filename
-     dtype: string
-   - name: table_ori
-     sequence:
-       sequence: string
-   - name: table
-     sequence:
-       sequence: string
-   - name: question
-     dtype: string
-   - name: answer
-     dtype: string
-   - name: steps
-     list:
-     - name: arg1
-       dtype: string
-     - name: arg2
-       dtype: string
-     - name: op
-       dtype: string
-     - name: res
-       dtype: string
-   - name: id
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 14261383
-     num_examples: 3037
-   - name: test
-     num_bytes: 970785
-     num_examples: 200
-   download_size: 6315858
-   dataset_size: 15232168
- configs:
- - config_name: default
-   data_files:
-   - split: train
-     path: data/train-*
-   - split: test
-     path: data/test-*
  ---
+ 
+ title: Financial Document QA Dataset
+ tags:
+ - financial-documents
+ - question-answering
+ - tabular-data
+ - machine-learning
+ license: apache-2.0
+ datasets:
+ - financial-document-qa
+ language: en
  ---
+ 
+ # **ConvFinQA: Financial Document QA Dataset**
+ 
+ ### **Dataset Summary**
+ 
+ ConvFinQA is a financial question-answering dataset over real-world financial reports, released with the EMNLP 2022 paper [ConvFinQA: Exploring the Chain of Numerical Reasoning in Conversational Finance Question Answering](https://arxiv.org/abs/2210.03849).
+ 
+ ### **Dataset Features**
+ 
+ - **Pre-text**: Contextual paragraphs that precede a table, giving information relevant to the financial table.
+ - **Post-text**: Context that follows the table, providing additional explanation or details.
+ - **Filename**: The source PDF filename from which the data was extracted.
+ - **Table_ori**: The original table content as it appears in the document.
+ - **Table**: The cleaned and normalized table, formatted for easier consumption.
+ - **QA**: Structured question-answer pairs, including reasoning steps and annotated text from the document.
+ 
+ ### **Dataset Structure**
+ 
+ This dataset is provided as a `DatasetDict` with `train` and `test` splits:
+ 
+ - **Train Dataset**: Structured data from financial documents, intended for model training.
+ - **Test Dataset**: The first 200 elements of the training data, held out for validation purposes.
+ 
+ Each split contains the following fields:
+ - **pre_text**: `List[str]`
+ - **post_text**: `List[str]`
+ - **filename**: `str`
+ - **table_ori**: `List[List[str]]`
+ - **table**: `List[List[str]]`
+ - **question**: `str`
+ - **answer**: `str`
+ - **steps**: `List[dict]`
+ - **id**: `str`
+ 
+ ### **Example Data**
+ 
+ Here’s a quick look at the data format:
+ 
+ ```json
+ {
+   "pre_text": ["value , which may be maturity ..."],
+   "post_text": ["."],
+   "filename": "VRTX/2005/page_103.pdf",
+   "table_ori": [["", "2005", "2004"], ["Furniture and equipment", "$98,387", "$90,893"]],
+   "table": [["", "2005", "2004"], ["furniture and equipment", "$ 98387", "$ 90893"]],
+   "question": "What is the percent change in net loss on disposal of assets between 2004 and 2005?",
+   "answer": "700%",
+   "steps": [
+     {"op": "minus1-1", "arg1": "344000", "arg2": "43000", "res": "301000"},
+     {"op": "divide1-2", "arg1": "#0", "arg2": "43000", "res": "700%"}
+   ],
+   "id": "Single_VRTX/2005/page_103.pdf-1"
+ }
+ ```
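The `steps` field encodes a small arithmetic program. A minimal interpreter sketch, assuming (from the example above, not an official spec) that op names start with the operation followed by positional suffixes like `1-1`, and that `#k` in an argument refers to the result of step `k`:

```python
def run_steps(steps):
    """Sketch: evaluate a ConvFinQA-style `steps` program.

    Assumptions (inferred from the example record, not an official spec):
    op names begin with "minus", "divide", "add", or "multiply", and an
    argument of the form "#k" is a back-reference to the result of step k.
    """
    results = []

    def value(arg):
        if arg.startswith("#"):            # back-reference to an earlier step
            return results[int(arg[1:])]
        return float(arg.rstrip("%"))

    for step in steps:
        a, b = value(step["arg1"]), value(step["arg2"])
        op = step["op"]
        if op.startswith("minus"):
            results.append(a - b)
        elif op.startswith("divide"):
            results.append(a / b)
        elif op.startswith("add"):
            results.append(a + b)
        elif op.startswith("multiply"):
            results.append(a * b)
        else:
            raise ValueError(f"unknown op: {op}")
    return results[-1]

steps = [
    {"op": "minus1-1", "arg1": "344000", "arg2": "43000", "res": "301000"},
    {"op": "divide1-2", "arg1": "#0", "arg2": "43000", "res": "700%"},
]
print(run_steps(steps))  # 7.0, i.e. the 700% answer from the example
```

Running this on the example record's steps reproduces its annotated answer: (344000 − 43000) / 43000 = 7.0.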
+ 
+ ### **Use Cases**
+ 
+ 1. **Question Answering**: Training models that answer complex numerical questions about financial data and documents.
+ 2. **Table Understanding**: The cleaned tables provide a clear format for working with structured tabular data.
+ 3. **Financial Document Parsing**: Developing models that parse financial documents and extract relevant information.
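For table understanding, the cleaned `table` field (first row acting as a header) maps naturally onto a pandas DataFrame. A small sketch using the table from the example record above (the dollar-stripping step is illustrative, assuming the `"$ 98387"` formatting shown there):

```python
import pandas as pd

# The cleaned `table` field from the example record; the first row is the header.
table = [
    ["", "2005", "2004"],
    ["furniture and equipment", "$ 98387", "$ 90893"],
]

df = pd.DataFrame(table[1:], columns=table[0])

# Strip the "$ " prefix and parse the yearly columns as integers.
for col in ("2005", "2004"):
    df[col] = df[col].str.replace("$ ", "", regex=False).astype(int)

print(df)
```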
+ 
+ ### **Loading the Dataset**
+ 
+ You can load the dataset from the Hugging Face Hub with:
+ 
+ ```python
+ from datasets import load_dataset
+ 
+ dataset = load_dataset("MehdiHosseiniMoghadam/financial-document-qa")
+ ```
+ 
+ Reference: https://github.com/czyssrs/ConvFinQA/tree/main?tab=readme-ov-file
+ 
  ---