license: cc-by-4.0
task_categories:
- text-classification
language:
- en
tags:
- SDQP
- scholarly
- citation_count_prediction
- review_score_prediction
dataset_info:
features:
- name: paperhash
dtype: string
- name: s2_corpus_id
dtype: string
- name: arxiv_id
dtype: string
- name: title
dtype: string
- name: abstract
dtype: string
- name: authors
sequence:
- name: name
dtype: string
- name: affiliation
struct:
- name: laboratory
dtype: string
- name: institution
dtype: string
- name: location
dtype: string
- name: summary
dtype: string
- name: field_of_study
sequence: string
- name: venue
dtype: string
- name: publication_date
dtype: string
- name: openreview_submission_id
dtype: string
- name: n_references
dtype: int32
- name: n_citations
dtype: int32
- name: n_influential_citations
dtype: int32
- name: introduction
dtype: string
- name: background
dtype: string
- name: methodology
dtype: string
- name: experiments_results
dtype: string
- name: conclusion
dtype: string
- name: full_text
dtype: string
- name: decision
dtype: bool
- name: decision_text
dtype: string
- name: reviews
sequence:
- name: review_id
dtype: string
- name: review
struct:
- name: title
dtype: string
- name: paper_summary
dtype: string
- name: main_review
dtype: string
- name: strength_weakness
dtype: string
- name: questions
dtype: string
- name: limitations
dtype: string
- name: review_summary
dtype: string
- name: score
dtype: float32
- name: confidence
dtype: float32
- name: novelty
dtype: float32
- name: correctness
dtype: float32
- name: clarity
dtype: float32
- name: impact
dtype: float32
- name: reproducibility
dtype: float32
- name: ethics
dtype: string
- name: comments
sequence:
- name: title
dtype: string
- name: comment
dtype: string
- name: references
sequence:
- name: paperhash
dtype: string
- name: title
dtype: string
- name: abstract
dtype: string
- name: authors
sequence:
- name: name
dtype: string
- name: affiliation
struct:
- name: laboratory
dtype: string
- name: institution
dtype: string
- name: location
dtype: string
- name: arxiv_id
dtype: string
- name: s2_corpus_id
dtype: string
- name: intents
sequence: string
- name: isInfluential
dtype: bool
- name: hypothesis
dtype: string
- name: month_since_publication
dtype: int32
- name: avg_citations_per_month
dtype: float32
- name: mean_score
dtype: float32
- name: mean_confidence
dtype: float32
- name: mean_novelty
dtype: float32
- name: mean_correctness
dtype: float32
- name: mean_clarity
dtype: float32
- name: mean_impact
dtype: float32
- name: mean_reproducibility
dtype: float32
splits:
- name: openreview_full_training
num_bytes: 406640201
num_examples: 5197
- name: openreview_full_validation
num_bytes: 406547240
num_examples: 5195
- name: openreview_full_test
num_bytes: 406640201
num_examples: 5197
download_size: 590644058
dataset_size: 1219827642
configs:
- config_name: acl_ocl
data_files:
- split: train
path: data/acl_ocl_train-*
- split: validation
path: data/acl_ocl_validation-*
- split: test
path: data/acl_ocl_test-*
- config_name: default
data_files:
- split: openreview_full_training
path: data/openreview_full_training-*
- split: openreview_full_validation
path: data/openreview_full_validation-*
- split: openreview_full_test
path: data/openreview_full_test-*
- config_name: openreview-full
data_files:
- split: train
path: data/openreview_full_training-*
- split: validation
path: data/openreview_full_validation-*
- split: test
path: data/openreview_full_test-*
- config_name: openreview-iclr
data_files:
- split: train
path: data/iclr_training-*
- split: validation
path: data/iclr_validation-*
- split: test
path: data/iclr_test-*
- config_name: openreview-neurips
data_files:
- split: train
path: data/neurips_training-*
- split: validation
path: data/neurips_validation-*
- split: test
path: data/neurips_test-*
- config_name: openreview-public
data_files:
- split: train
path: data/openreview_public_train-*
- split: validation
path: data/openreview_public_validation-*
- split: test
path: data/openreview_public_test-*
Datasets related to the task of Scholarly Document Quality Prediction (SDQP). Each sample is an academic paper for which either the citation count or the review score can be predicted (depending on availability). The information that is potentially available for each sample can be found below.
ACL-OCL Extended
A dataset for citation count prediction only, based on the ACL-OCL dataset, extended with updated citation counts, references, and annotated research hypotheses.
OpenReview
A dataset for review score and citation count prediction, obtained by parsing OpenReview. Due to licensing, the dataset comes in different formats:
Datasets with parsed PDFs of submissions (i.e., the fields introduction, background, methodology, experiments_results, conclusion, and full_text are available):
- openreview-public: Contains full information on all OpenReview submissions that are accompanied by a CC BY 4.0 license.
Datasets without parsed PDFs of submissions (i.e., the fields introduction, background, methodology, experiments_results, conclusion, and full_text are None):
- openreview-full: Contains all OpenReview submissions; splits are generated based on publication dates.
- openreview-iclr: All ICLR submissions from the years 2018-2023 (training) and 2024 (validation and test).
- openreview-neurips: All NeurIPS submissions from the years 2021-2023 (training) and 2024 (validation and test).
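The configs above map onto the Hugging Face `datasets` loading API. A minimal sketch; the Hub repo id is not stated on this card, so it is read here from an (assumed) `SDQP_REPO` environment variable rather than hard-coded:

```python
import os

# Config names declared in this card; each exposes train/validation/test
# splits (the "default" config uses the openreview_full_* split names).
CONFIG_NAMES = [
    "acl_ocl",
    "openreview-full",
    "openreview-iclr",
    "openreview-neurips",
    "openreview-public",
]

repo_id = os.environ.get("SDQP_REPO")  # e.g. "<user>/<dataset-name>"
if repo_id:
    from datasets import load_dataset

    # Load one split of one config; every sample is a paper record.
    ds = load_dataset(repo_id, "openreview-iclr", split="train")
    print(ds[0]["title"], ds[0]["n_citations"])
```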
Citation
If you use the dataset in your work, please cite it. The dataset is released under the CC BY 4.0 license.
The data models for papers and their associated records:
Paper Data Model
{
# IDs
"paperhash": str,
"arxiv_id": str | None,
"s2_corpus_id": str | None,
# Basic Info
"title":str,
"authors": list[Author],
"abstract": str | None,
"summary": str | None,
"publication_date": str | None,
# OpenReview Metadata
"field_of_study": list[str] | str | None,
"venue": str | None,
# s2 Metadata
"n_references": int | None,
"n_citations": int | None,
"n_influential_citations": int | None,
"open_access": bool | None,
"external_ids": dict | None,
"pdf_url": str | None,
# Content
"parsed_pdf": dict | None,
"parsed_latex": dict | None,
"structured_content": dict[str, Section],
# Review Data
"openreview": bool,
"decision": bool | None,
"decision_text": str | None,
"reviews": list[Review] | None,
"comments": list[Comment] | None,
# References
"references": list[Reference] | None,
"bibref2section": dict,
"bibref2paperhash": dict,
# Hypothesis
"hypothesis": dict | None
}
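The derived regression targets in the schema (avg_citations_per_month and the mean_* review aggregates) can be recomputed from a record. A minimal sketch; the exact formulas used when building the dataset are an assumption here:

```python
def avg_citations_per_month(n_citations, months_since_publication):
    """Citation rate; guards against missing counts and zero-month papers.
    (Assumed formula -- the card does not spell it out.)"""
    if n_citations is None or not months_since_publication:
        return None
    return n_citations / months_since_publication

def mean_review_field(reviews, field):
    """Average a per-review numeric field (e.g. "score"), skipping None."""
    values = [r[field] for r in reviews or [] if r.get(field) is not None]
    return sum(values) / len(values) if values else None

reviews = [{"score": 6.0}, {"score": 8.0}, {"score": None}]
mean_review_field(reviews, "score")  # -> 7.0
```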
Author Data Model
{
"name":str,
"affiliation": {
"laboratory": str | dict | None,
"institution": str | dict | None,
"location": str | dict | None
}
}
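The affiliation fields can arrive either as plain strings or as nested dicts, depending on the parser that produced them. A small normalization sketch (the dict shape is an assumption, so it just joins whatever string values are present):

```python
def affiliation_str(affiliation):
    """Render laboratory/institution/location, each of which is
    str | dict | None, as a single display string."""
    parts = []
    for key in ("laboratory", "institution", "location"):
        value = (affiliation or {}).get(key)
        if isinstance(value, dict):
            # Assumed dict shape: keep any non-empty string values.
            value = ", ".join(str(v) for v in value.values() if v)
        if value:
            parts.append(value)
    return ", ".join(parts)

affiliation_str({"laboratory": None, "institution": "MIT",
                 "location": {"country": "USA"}})  # -> "MIT, USA"
```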
Reference Data Model
{
"paperhash": str,
"title": str,
"abstract": str = "",
"authors": list[Author],
# IDs
"arxiv_id": str | None,
"s2_corpus_id": str | None,
"external_ids": dict| None,
# Reference specific info
"intents": list[str] | None = None,
"isInfluential": bool | None = None
}
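References carry Semantic Scholar citation metadata, so, for example, influential references with a given citation intent (Semantic Scholar uses "methodology", "background", and "result") can be filtered from a paper's reference list. A sketch over toy records:

```python
def influential_refs(references, intent=None):
    """Select references flagged as influential, optionally restricted
    to one Semantic Scholar citation intent."""
    selected = []
    for ref in references or []:
        if not ref.get("isInfluential"):
            continue
        if intent is not None and intent not in (ref.get("intents") or []):
            continue
        selected.append(ref)
    return selected

refs = [
    {"paperhash": "a", "isInfluential": True, "intents": ["methodology"]},
    {"paperhash": "b", "isInfluential": False, "intents": ["background"]},
]
[r["paperhash"] for r in influential_refs(refs, "methodology")]  # -> ["a"]
```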
Comment Data Model
{
"title": str,
"comment": str
}
Section Data Model
{
"name": str,
"sec_num": str,
"classification": str,
"text": str,
"subsections": list[Section]
}
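Since sections nest through subsections, reconstructing a linear text (e.g. an approximation of full_text from structured_content) takes a small recursion; a sketch:

```python
def flatten_section(section):
    """Depth-first concatenation of a Section's text and its subsections."""
    parts = [section.get("text") or ""]
    for sub in section.get("subsections") or []:
        parts.append(flatten_section(sub))
    return "\n".join(p for p in parts if p)

sec = {
    "name": "Introduction",
    "text": "Intro text.",
    "subsections": [{"name": "Motivation", "text": "Why.", "subsections": []}],
}
flatten_section(sec)  # -> "Intro text.\nWhy."
```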
Review Data Model
{
"review_id": str,
"review": {
"title": str | None,
"paper_summary": str | None,
"main_review": str | None,
"strength_weakness": str | None,
"questions": str | None,
"limitations": str | None,
"review_summary": str | None
},
"score": float | None,
"confidence": float | None,
"novelty": float | None,
"correctness": float | None,
"clarity": float | None,
"impact": float | None,
"reproducibility": float | None,
"ethics": str | None
}
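Depending on the venue's review template, only some of the free-text review fields are filled. A sketch that collects whichever fields are present into a single string (e.g. as model input):

```python
# Free-text fields of the nested "review" struct, in schema order.
REVIEW_TEXT_FIELDS = [
    "title", "paper_summary", "main_review", "strength_weakness",
    "questions", "limitations", "review_summary",
]

def review_text(review):
    """Join the non-empty free-text fields of a review record."""
    inner = review.get("review") or {}
    return "\n\n".join(inner[f] for f in REVIEW_TEXT_FIELDS if inner.get(f))

r = {"review_id": "r1",
     "review": {"title": None, "paper_summary": "Summary.",
                "main_review": "Solid work."}}
review_text(r)  # -> "Summary.\n\nSolid work."
```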