---
title: Financial Document QA Dataset
tags:
- financial-documents
- question-answering
- tabular-data
- machine-learning
license: apache-2.0
datasets:
- financial-document-qa
language: en
---

# **ConvFinQA: Financial Document QA Dataset**

### **Dataset Summary**

The ConvFinQA dataset and code from the EMNLP 2022 paper [ConvFinQA: Exploring the Chain of Numerical Reasoning in Conversational Finance Question Answering](https://arxiv.org/abs/2210.03849).

### **Dataset Features**

- **Pre-text**: Contextual paragraphs that precede the table, giving information relevant to the financial table.
- **Post-text**: Context that follows the table, providing additional explanation or details.
- **Filename**: The source PDF filename from which the data was extracted.
- **Table_ori**: The original table content as it appears in the document.
- **Table**: A cleaned and normalized version of the table for easier consumption.
- **QA**: Structured question-answer pairs, including reasoning steps and annotated text from the document.

### **Dataset Structure**

This dataset is provided as a `DatasetDict` with `train` and `test` splits:

- **Train Dataset**: Structured data from financial documents, intended for training machine learning models.
- **Test Dataset**: The first 200 examples of the training data, held out for validation purposes.

Each split contains the following fields:

- **pre_text**: `List[str]`
- **post_text**: `List[str]`
- **filename**: `str`
- **table_ori**: `List[List[str]]`
- **table**: `List[List[str]]`
- **question**: `str`
- **answer**: `str`
- **steps**: `List[dict]`
- **id**: `str`
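As a rough sketch of how these fields fit together, here is a minimal hand-written record in that shape (the values are illustrative and not taken from the dataset):

```python
# Illustrative record matching the field schema above; values are made up.
record = {
    "pre_text": ["the company reported the following balances ."],
    "post_text": ["."],
    "filename": "EXAMPLE/2005/page_1.pdf",
    "table_ori": [["", "2005", "2004"], ["Revenue", "$100", "$80"]],
    "table": [["", "2005", "2004"], ["revenue", "$ 100", "$ 80"]],
    "question": "what was revenue in 2005?",
    "answer": "$ 100",
    "steps": [],
    "id": "Single_EXAMPLE/2005/page_1.pdf-0",
}

# Sanity checks mirroring the declared field types.
assert all(isinstance(p, str) for p in record["pre_text"])
assert all(isinstance(cell, str) for row in record["table"] for cell in row)
assert isinstance(record["steps"], list)
```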

### **Example Data**

Here's a quick look at the data format:

```json
{
  "pre_text": ["value , which may be maturity ..."],
  "post_text": ["."],
  "filename": "VRTX/2005/page_103.pdf",
  "table_ori": [["", "2005", "2004"], ["Furniture and equipment", "$98,387", "$90,893"]],
  "table": [["", "2005", "2004"], ["furniture and equipment", "$ 98387", "$ 90893"]],
  "question": "What is the percent change in net loss on disposal of assets between 2004 and 2005?",
  "answer": "700%",
  "steps": [
    {"op": "minus1-1", "arg1": "344000", "arg2": "43000", "res": "301000"},
    {"op": "divide1-2", "arg1": "#0", "arg2": "43000", "res": "700%"}
  ],
  "id": "Single_VRTX/2005/page_103.pdf-1"
}
```
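The `steps` field encodes a small arithmetic program, where an argument of the form `#N` refers to the result of step N. A minimal interpreter for the two op families appearing in the example above (the function name `evaluate_steps` is my own, and ops other than `minus*`/`divide*` are not handled) might look like:

```python
def evaluate_steps(steps):
    """Evaluate a ConvFinQA-style step program.

    An argument of the form "#N" refers to the result of step N.
    Only the "minus*" and "divide*" op families are handled here.
    """
    results = []

    def resolve(arg):
        # "#0" -> result of step 0; otherwise parse as a number.
        return results[int(arg[1:])] if arg.startswith("#") else float(arg)

    for step in steps:
        a, b = resolve(step["arg1"]), resolve(step["arg2"])
        if step["op"].startswith("minus"):
            results.append(a - b)
        elif step["op"].startswith("divide"):
            results.append(a / b)
        else:
            raise ValueError(f"unsupported op: {step['op']}")
    return results


steps = [
    {"op": "minus1-1", "arg1": "344000", "arg2": "43000", "res": "301000"},
    {"op": "divide1-2", "arg1": "#0", "arg2": "43000", "res": "700%"},
]
print(f"{evaluate_steps(steps)[-1]:.0%}")  # (344000 - 43000) / 43000 -> 700%
```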

### **Use Cases**

1. **Question Answering**: The dataset is well suited to training models that answer complex questions about financial data and documents.
2. **Table Understanding**: The cleaned tables provide a clear format for working with structured tabular data.
3. **Financial Document Parsing**: The dataset can be used to develop models that parse financial documents and extract relevant information.
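For table understanding, the cleaned `table` rows (header row first) can be pivoted into per-row records; a quick sketch, where `table_to_records` is a hypothetical helper and the first row is assumed to hold column labels:

```python
def table_to_records(table):
    """Turn a header-first list-of-rows table into a list of dicts.

    `table_to_records` is an illustrative helper, not part of the dataset;
    it assumes table[0] contains the column labels.
    """
    header, *rows = table
    return [dict(zip(header, row)) for row in rows]


# The cleaned table from the example record above.
table = [["", "2005", "2004"], ["furniture and equipment", "$ 98387", "$ 90893"]]
records = table_to_records(table)
# records[0] -> {"": "furniture and equipment", "2005": "$ 98387", "2004": "$ 90893"}
```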

### **Loading the Dataset**

You can load the dataset from the Hugging Face Hub using the following code:

```python
from datasets import load_dataset

dataset = load_dataset("MehdiHosseiniMoghadam/financial-document-qa")
```
|
83 |
+
ref: https://github.com/czyssrs/ConvFinQA/tree/main?tab=readme-ov-file
|
84 |
+
```
|
85 |
+
|
86 |
---
|
87 |
+
|
88 |
+
You can adjust the title and citation information as necessary depending on the specific details of your dataset. Once your dataset is uploaded to the Hugging Face Hub, it will also automatically generate some sections like usage, but this card will help structure important information about your dataset!
|