---
license: cc-by-4.0
task_categories:
- text-classification
language:
- en
tags:
- SDQP
- scholarly
- citation_count_prediction
- review_score_prediction
configs:
- config_name: acl_ocl
  data_files:
  - split: train
    path: acl_ocl/*.json
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: paperhash
    dtype: string
  - name: s2_corpus_id
    dtype: string
  - name: arxiv_id
    dtype: string
  - name: title
    dtype: string
  - name: abstract
    dtype: string
  - name: authors
    sequence:
    - name: name
      dtype: string
    - name: affiliation
      struct:
      - name: laboratory
        dtype: string
      - name: institution
        dtype: string
      - name: location
        dtype: string
  - name: summary
    dtype: string
  - name: field_of_study
    sequence: string
  - name: venue
    dtype: string
  - name: publication_date
    dtype: string
  - name: n_references
    dtype: int32
  - name: n_citations
    dtype: int32
  - name: n_influential_citations
    dtype: int32
  - name: introduction
    dtype: string
  - name: background
    dtype: string
  - name: methodology
    dtype: string
  - name: experiments_results
    dtype: string
  - name: conclusion
    dtype: string
  - name: full_text
    dtype: string
  - name: decision
    dtype: bool
  - name: decision_text
    dtype: string
  - name: reviews
    sequence:
    - name: review_id
      dtype: string
    - name: review
      struct:
      - name: title
        dtype: string
      - name: paper_summary
        dtype: string
      - name: main_review
        dtype: string
      - name: strength_weakness
        dtype: string
      - name: questions
        dtype: string
      - name: limitations
        dtype: string
      - name: review_summary
        dtype: string
    - name: score
      dtype: float32
    - name: confidence
      dtype: float32
    - name: novelty
      dtype: float32
    - name: correctness
      dtype: float32
    - name: clarity
      dtype: float32
    - name: impact
      dtype: float32
    - name: reproducibility
      dtype: float32
    - name: ethics
      dtype: string
  - name: comments
    sequence:
    - name: title
      dtype: string
    - name: comment
      dtype: string
  - name: references
    sequence:
    - name: paperhash
      dtype: string
    - name: title
      dtype: string
    - name: abstracts
      dtype: string
    - name: authors
      sequence:
      - name: name
        dtype: string
      - name: affiliation
        struct:
        - name: laboratory
          dtype: string
        - name: institution
          dtype: string
        - name: location
          dtype: string
    - name: arxiv_id
      dtype: string
    - name: s2_corpus_id
      dtype: string
    - name: intents
      sequence: string
    - name: isInfluential
      dtype: bool
  - name: hypothesis
    dtype: string
  splits:
  - name: train
    num_bytes: 5134
    num_examples: 2
  download_size: 44577
  dataset_size: 5134
---

Datasets related to the task of Scholarly Document Quality Prediction (SDQP). Each sample is an academic paper for which either the citation count or the review score can be predicted (depending on availability). The information that is potentially available for each sample is described below.

## ACL-OCL Extended

A dataset for citation count prediction only, based on the [ACL-OCL dataset](https://huggingface.co/datasets/WINGNUS/ACL-OCL/tree/main). It is extended with updated citation counts, references, and annotated research hypotheses.

## OpenReview

A dataset for review score and citation count prediction, obtained by parsing OpenReview. Due to licensing constraints, the dataset comes in three formats:

1. `openreview-public`: Contains full information on all OpenReview submissions that are accompanied by a license.
2. `openreview-full-light`: The full dataset excluding the parsed PDFs of the submitted papers.
3. `openreview-full`: A script to obtain the full dataset with submissions.
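
The configurations can be loaded with the Hugging Face `datasets` library. The snippet below is a minimal sketch: the repository id is a placeholder you should replace with this dataset's id on the Hub, and the config names (`acl_ocl`, `default`) are taken from the metadata above; the OpenReview variants may use other config names.

```python
from datasets import load_dataset

# Placeholder repository id -- replace with this dataset's Hub id.
acl_ocl = load_dataset("<org>/<dataset-name>", "acl_ocl", split="train")

# Inspect one sample; the fields follow the data models described below.
sample = acl_ocl[0]
print(sample["title"], sample["n_citations"])
```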
## Citation

If you use the dataset in your work, please cite:

## Data Models

The data models for the papers and their associated objects are given below.

### Paper Data Model

```json
{
    # IDs
    "paperhash": str,
    "arxiv_id": str | None,
    "s2_corpus_id": str | None,

    # Basic Info
    "title": str,
    "authors": list[Author],
    "abstract": str | None,
    "summary": str | None,
    "publication_date": str | None,

    # OpenReview Metadata
    "field_of_study": list[str] | str | None,
    "venue": str | None,

    # S2 Metadata
    "n_references": int | None,
    "n_citations": int | None,
    "n_influential_citations": int | None,
    "open_access": bool | None,
    "external_ids": dict | None,
    "pdf_url": str | None,

    # Content
    "parsed_pdf": dict | None,
    "parsed_latex": dict | None,
    "structured_content": dict[str, Section],

    # Review Data
    "openreview": bool,
    "decision": bool | None,
    "decision_text": str | None,
    "reviews": list[Review] | None,
    "comments": list[Comment] | None,

    # References
    "references": list[Reference] | None,
    "bibref2section": dict,
    "bibref2paperhash": dict,

    # Hypothesis
    "hypothesis": dict | None
}
```

### Author Data Model

```json
{
    "name": str,
    "affiliation": {
        "laboratory": str | dict | None,
        "institution": str | dict | None,
        "location": str | dict | None
    }
}
```

### Reference Data Model

```json
{
    "paperhash": str,
    "title": str,
    "abstract": str = "",
    "authors": list[Author],

    # IDs
    "arxiv_id": str | None,
    "s2_corpus_id": str | None,
    "external_ids": dict | None,

    # Reference-specific info
    "intents": list[str] | None = None,
    "isInfluential": bool | None = None
}
```

### Comment Data Model

```json
{
    "title": str,
    "comment": str
}
```

### Section Data Model

```json
{
    "name": str,
    "sec_num": str,
    "classification": str,
    "text": str,
    "subsections": list[Section]
}
```

### Review Data Model

```json
{
    "review_id": str,
    "review": {
        "title": str | None,
        "paper_summary": str | None,
        "main_review": str | None,
        "strength_weakness": str | None,
        "questions": str | None,
        "limitations": str | None,
        "review_summary": str | None
    },
    "score": float | None,
    "confidence": float | None,
    "novelty": float | None,
    "correctness": float | None,
    "clarity": float | None,
    "impact": float | None,
    "reproducibility": float | None,
    "ethics": str | None
}
```
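
As a minimal sketch of how these models fit together, the helpers below derive the two prediction targets and flatten the recursive `Section` structure. They assume samples are plain Python dicts following the schemas above; the helper names are illustrative and not part of the dataset.

```python
from typing import Optional

def mean_review_score(paper: dict) -> Optional[float]:
    """Average the available review scores of a paper (the usual target for
    review score prediction); returns None if no scores are present."""
    scores = [r["score"] for r in (paper.get("reviews") or []) if r.get("score") is not None]
    return sum(scores) / len(scores) if scores else None

def section_text(section: dict) -> str:
    """Recursively flatten a Section and its subsections into plain text."""
    parts = [section.get("text") or ""]
    parts += [section_text(sub) for sub in (section.get("subsections") or [])]
    return "\n".join(p for p in parts if p)
```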