prompteus committed · commit 7656e0c · 1 parent: b494daa

Update README.md

Files changed (1):
  1. README.md +35 -13
README.md CHANGED
@@ -121,32 +121,54 @@ This dataset presents in-context scenarios where models can outsource the comput

  ## Construction Process

- We took the original math_qa dataset, parsed the nested formulas, linearized them into a sequence (chain) of operations, and replace all advanced
  function calls (such as `circle_area`) with explicit elementary operations. We evaluate all the steps in each example and filter out examples if their
- evaluation does not match the answer selected as correct in the data with a 5% tolerance. The sequence of steps is then saved in HTML-like language
- in the `chain` column. We keep the original columns in the dataset for convenience.

- You can read more information about this process in our [technical report](https://arxiv.org/abs/2305.15017).

- ## Content and Data splits

- Content and splits correspond to the original math_qa dataset.
- See [mathqa HF dataset](https://huggingface.co/datasets/math_qa) and [official website](https://math-qa.github.io/) for more info.

- Columns:

- - `question` - th description of a mathematical problem in natural language
- - `options` - dictionary with choices 'A' to 'E' as possible solutions
  - `chain` - solution in the form of step-by-step calculations encoded in simple html-like language. computed from `annotated_formula` column
  - `result` - the correct option
  - `result_float` - the result converted to a float
  - `annotated_formula` - human-annotated nested expression that (approximately) evaluates to the selected correct answer
  - `linear_formula` - same as `annotated_formula`, but linearized by original math_qa authors
  - `rationale` - human-annotated free-text reasoning that leads to the correct answer
- - `index` - index of the example in the original math_qa dataset

  ## Licence
@@ -156,14 +178,14 @@ Apache 2.0, consistently with the original dataset.

  ## Cite

- If you use this version of dataset in research, please cite the [original MathQA paper](https://arxiv.org/abs/1905.13319), and [Calc-X collection](https://arxiv.org/abs/2305.15017) as follows:

  ```bibtex
  @inproceedings{kadlcik-etal-2023-soft,
      title = "Calc-X and Calcformers: Empowering Arithmetical Chain-of-Thought through Interaction with Symbolic Systems",
      author = "Marek Kadlčík and Michal Štefánik and Ondřej Sotolář and Vlastimil Martinek",
      booktitle = "Proceedings of the The 2023 Conference on Empirical Methods in Natural Language Processing: Main track",
- month = december,
      year = "2023",
      address = "Singapore, Singapore",
      publisher = "Association for Computational Linguistics",
 
  ## Construction Process

+ We took the original math_qa dataset, parsed the nested formulas, linearized them into a sequence (chain) of operations, and replaced all advanced
  function calls (such as `circle_area`) with explicit elementary operations. We evaluate all the steps in each example and filter out examples if their
+ evaluation does not match the answer selected as correct in the data within a 5% tolerance; about 26k examples remain. The sequence of steps is then saved in an HTML-like language
+ in the `chain` column.

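To make the construction step concrete, here is a minimal, self-contained sketch of the idea: expanding an advanced call such as `circle_area` into elementary operations, evaluating the resulting chain, and applying the 5% tolerance check. The `EXPANSIONS` table and helper names are hypothetical and are not the actual Calc-X preprocessing code:

```python3
# Minimal sketch (hypothetical helpers, not the actual Calc-X preprocessing):
# expand an advanced call into elementary operations, evaluate the chain,
# and keep the example only if it matches the labelled answer within 5%.
import math

# hypothetical expansion table: advanced op -> elementary steps;
# "prev" refers to the output of the previous step
EXPANSIONS = {
    "circle_area": lambda r: [("multiply", r, r), ("multiply", "prev", math.pi)],
}

def evaluate_chain(steps):
    prev = None
    for op, a, b in steps:
        a = prev if a == "prev" else a
        b = prev if b == "prev" else b
        if op == "add":
            prev = a + b
        elif op == "subtract":
            prev = a - b
        elif op == "multiply":
            prev = a * b
        elif op == "divide":
            prev = a / b
    return prev

def matches_selected_answer(value, target, rel_tol=0.05):
    # filter criterion: evaluated chain must be within 5% of the labelled answer
    return math.isclose(value, target, rel_tol=rel_tol)

steps = EXPANSIONS["circle_area"](2.0)          # area of a circle with r = 2
assert matches_selected_answer(evaluate_chain(steps), 12.57)
```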
+ We also perform in-dataset and cross-dataset data-leak detection within the [Calc-X collection](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483).
+ Specifically for MathQA, we found that the majority of validation and test examples are near-duplicates of some example in the train set, and that all validation and test
+ examples likely originate from the Aqua-RAT train split. We do not recommend using the original validation and test splits of the MathQA dataset.

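The leak-detection procedure itself is described in the Calc-X paper; purely as an illustration of what near-duplicate flagging means here (an assumed approach, not the authors' method), one could compare normalized question strings:

```python3
# Illustration only: flag near-duplicate questions by string similarity.
# This is an assumed approach, not the procedure used in the Calc-X paper.
from difflib import SequenceMatcher

def is_near_duplicate(a: str, b: str, threshold: float = 0.9) -> bool:
    # normalize case and whitespace before comparing
    a = " ".join(a.lower().split())
    b = " ".join(b.lower().split())
    return SequenceMatcher(None, a, b).ratio() >= threshold

train_q = "a train running at 60 km/h crosses a pole in 9 seconds. find the length of the train."
test_q = "a train running at 60 kmph crosses a pole in 9 seconds. find the length of the train?"
print(is_near_duplicate(train_q, test_q))  # trivial rewording -> True
```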
+ You can read more about this process in our [Calc-X paper](https://arxiv.org/abs/2305.15017).

+ ## Data splits

+ In our default configuration, the test and validation splits are removed, and we recommend using MathQA for training only. You can load it using:

+ ```python3
+ datasets.load_dataset("MU-NLPC/calc-math_qa")
+ ```

+ If you want to use the original dataset splits, you can load them using:

+ ```python3
+ datasets.load_dataset("MU-NLPC/calc-math_qa", "original-splits")
+ ```
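As a self-contained version of the snippets above (assuming the Hugging Face `datasets` library is installed; the printed splits are what the card leads one to expect):

```python3
import datasets

# Default configuration: expected to expose only a train split,
# since the validation/test splits were dropped for leakage reasons.
ds = datasets.load_dataset("MU-NLPC/calc-math_qa")
print(ds)

# The original MathQA splits remain available under "original-splits".
original = datasets.load_dataset("MU-NLPC/calc-math_qa", "original-splits")
print(original)
```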
+ ## Attributes

+ - `id` - id of the example
+ - `question` - the description of a mathematical problem in natural language; includes the options to choose from
  - `chain` - solution in the form of step-by-step calculations encoded in simple html-like language. computed from `annotated_formula` column
  - `result` - the correct option
  - `result_float` - the result converted to a float
+ - `question_without_options` - same as `question`, but without the options
+ - `options` - dictionary of options to choose from, one is correct, keys are "A".."E"
  - `annotated_formula` - human-annotated nested expression that (approximately) evaluates to the selected correct answer
  - `linear_formula` - same as `annotated_formula`, but linearized by original math_qa authors
  - `rationale` - human-annotated free-text reasoning that leads to the correct answer
+ - `category` - category of the math problem

+ Attributes `id`, `question`, `chain`, and `result` are present in all datasets in the [Calc-X collection](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483).
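For example, to look at these attributes on a single row (a usage sketch; the column names are taken from the list above):

```python3
import datasets

ds = datasets.load_dataset("MU-NLPC/calc-math_qa", split="train")
ex = ds[0]

print(ex["question"])                  # problem statement, options included
print(ex["question_without_options"])  # same statement without the options
print(ex["chain"])                     # step-by-step solution in the HTML-like format
print(ex["result"], ex["result_float"])
```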

+ ## Sources

+ - [mathqa HF dataset](https://huggingface.co/datasets/math_qa)
+ - [official website](https://math-qa.github.io/)

  ## Licence
 
  ## Cite

+ If you use this version of the dataset in research, please cite the [original MathQA paper](https://arxiv.org/abs/1905.13319) and the [Calc-X paper](https://arxiv.org/abs/2305.15017) as follows:

  ```bibtex
  @inproceedings{kadlcik-etal-2023-soft,
      title = "Calc-X and Calcformers: Empowering Arithmetical Chain-of-Thought through Interaction with Symbolic Systems",
      author = "Marek Kadlčík and Michal Štefánik and Ondřej Sotolář and Vlastimil Martinek",
      booktitle = "Proceedings of the The 2023 Conference on Empirical Methods in Natural Language Processing: Main track",
+     month = dec,
      year = "2023",
      address = "Singapore, Singapore",
      publisher = "Association for Computational Linguistics",