---
license: cc-by-4.0
task_categories:
  - text-classification
language:
  - en
tags:
  - SDQP
  - scholarly
  - citation_count_prediction
  - review_score_prediction
configs:
  - config_name: acl_ocl
    data_files:
      - split: train
        path: acl_ocl/*.json
---

Datasets related to the task of Scholarly Document Quality Prediction (SDQP). Each sample is an academic paper for which either the citation count or the review score can be predicted (depending on availability). The information that is potentially available for each sample can be found below.

## ACL-OCL Extended

A dataset for citation count prediction only, based on the ACL-OCL dataset, extended with updated citation counts, references, and annotated research hypotheses.

## OpenReview

A dataset for review score and citation count prediction, obtained by parsing OpenReview. Due to licensing constraints, the dataset comes in three formats:

  1. `openreview-public`: Contains full information on all OpenReview submissions that are accompanied by a license.
  2. `openreview-full-light`: The full dataset, excluding the parsed PDFs of the submitted papers.
  3. `openreview-full`: A script to obtain the full dataset, including submissions.

## Citation

If you use the dataset in your work, please cite:

## Data Models

The data model for the papers:

### Paper Data Model

```python
{
  # IDs
  "paperhash": str,
  "arxiv_id": str | None,
  "s2_corpus_id": str | None,

  # Basic Info
  "title": str,
  "authors": list[Author],
  "abstract": str | None,
  "summary": str | None,
  "publication_date": str | None,

  # OpenReview Metadata
  "field_of_study": list[str] | str | None,
  "venue": str | None,

  # S2 Metadata
  "n_references": int | None,
  "n_citations": int | None,
  "n_influential_citations": int | None,
  "open_access": bool | None,
  "external_ids": dict | None,
  "pdf_url": str | None,

  # Content
  "parsed_pdf": dict | None,
  "parsed_latex": dict | None,
  "structured_content": dict[str, Section],

  # Review Data
  "openreview": bool,
  "decision": bool | None,
  "decision_text": str | None,
  "reviews": list[Review] | None,
  "comments": list[Comment] | None,

  # References
  "references": list[Reference] | None,
  "bibref2section": dict,
  "bibref2paperhash": dict,

  # Hypothesis
  "hypothesis": dict | None
}
```
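As an illustration of how a record following this model might be consumed, the sketch below builds a minimal paper dict and reads the citation-count prediction target. All values are fabricated placeholders, not real dataset entries, and real records carry the full schema above.

```python
# Fabricated minimal paper record following the Paper data model above.
# Placeholder values only; real samples include many more fields.
paper = {
    "paperhash": "doe|2023|example_title",
    "arxiv_id": None,
    "s2_corpus_id": "123456",
    "title": "Example Title",
    "authors": [
        {
            "name": "Jane Doe",
            "affiliation": {
                "laboratory": None,
                "institution": "Example University",
                "location": None,
            },
        }
    ],
    "abstract": "An example abstract.",
    "n_references": 30,
    "n_citations": 12,
    "openreview": False,
}

# For citation count prediction, the target is n_citations
# (which may be None for some samples).
target = paper["n_citations"]
print(target)  # 12
```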

### Author Data Model

```python
{
  "name": str,
  "affiliation": {
    "laboratory": str | dict | None,
    "institution": str | dict | None,
    "location": str | dict | None
  }
}
```

### Reference Data Model

```python
{
  "paperhash": str,
  "title": str,
  "abstract": str = "",
  "authors": list[Author],

  # IDs
  "arxiv_id": str | None,
  "s2_corpus_id": str | None,
  "external_ids": dict | None,

  # Reference-specific info
  "intents": list[str] | None = None,
  "isInfluential": bool | None = None
}
```
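Since `isInfluential` can be `True`, `False`, or `None` (unknown), counting a paper's influential references needs to treat `None` as not-influential. A sketch with fabricated reference values; only the field names come from the model above:

```python
def count_influential(references: list[dict]) -> int:
    """Count references flagged as influential.

    Treats isInfluential values of None (unknown) and False alike:
    neither contributes to the count.
    """
    return sum(1 for ref in references if ref.get("isInfluential"))


# Fabricated references following the Reference data model above.
references = [
    {"paperhash": "a|2020|x", "isInfluential": True},
    {"paperhash": "b|2021|y", "isInfluential": None},
    {"paperhash": "c|2019|z", "isInfluential": False},
]
print(count_influential(references))  # 1
```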

### Comment Data Model

```python
{
  "title": str,
  "comment": str
}
```

### Section Data Model

```python
{
  "name": str,
  "sec_num": str,
  "classification": str,
  "text": str,
  "subsections": list[Section]
}
```
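Because sections nest recursively through `subsections`, extracting the full text under one entry of `structured_content` takes a small recursive walk. A sketch, using a fabricated section dict that follows the model above:

```python
def section_text(section: dict) -> str:
    """Concatenate a section's text with the text of all nested subsections."""
    parts = [section["text"]]
    for sub in section.get("subsections", []):
        parts.append(section_text(sub))
    return "\n".join(parts)


# Fabricated example following the Section data model above.
intro = {
    "name": "Introduction",
    "sec_num": "1",
    "classification": "introduction",
    "text": "Opening paragraph.",
    "subsections": [
        {
            "name": "Motivation",
            "sec_num": "1.1",
            "classification": "introduction",
            "text": "Why this matters.",
            "subsections": [],
        }
    ],
}
print(section_text(intro))  # Opening paragraph.\nWhy this matters.
```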

### Review Data Model

```python
{
  "review_id": str,
  "review": {
    "title": str | None,
    "paper_summary": str | None,
    "main_review": str | None,
    "strength_weakness": str | None,
    "questions": str | None,
    "limitations": str | None,
    "review_summary": str | None
  },
  "score": float | None,
  "confidence": float | None,
  "novelty": float | None,
  "correctness": float | None,
  "clarity": float | None,
  "impact": float | None,
  "reproducibility": float | None,
  "ethics": str | None
}
```