---
dataset_info:
  features:
    - name: repository
      dtype: string
    - name: repo_id
      dtype: string
    - name: target_module_path
      dtype: string
    - name: prompt
      dtype: string
    - name: relavent_test_path
      dtype: string
    - name: full_function
      dtype: string
    - name: function_name
      dtype: string
    - name: context-complexity
      dtype: string
  splits:
    - name: test
      num_bytes: 2406711
      num_examples: 200
  download_size: 937112
  dataset_size: 2406711
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
---

Can Language Models Replace Programmers? REPOCOD Says 'Not Yet'

Large language models (LLMs) have achieved high accuracy, i.e., more than 90% pass@1, in solving Python coding problems in HumanEval and MBPP. This raises a natural question: can LLMs achieve code completion performance comparable to that of human developers? Unfortunately, one cannot answer this question using existing manually crafted or simple (e.g., single-line) code generation benchmarks, since such tasks fail to represent real-world software development tasks. In addition, existing benchmarks often use poor code correctness metrics, leading to misleading conclusions.

To address these challenges, we create REPOCOD, a code generation benchmark with 980 problems collected from 11 popular real-world projects, more than 58% of which require file-level or repository-level context information. In addition, REPOCOD has the longest average canonical solution length (331.6 tokens) and the highest average cyclomatic complexity (9.00) compared to existing benchmarks. Each task in REPOCOD includes 313.5 developer-written test cases on average for better correctness evaluation. In our evaluation of ten LLMs, none achieves more than 30% pass@1 on REPOCOD, revealing the need for stronger LLMs that can help developers in real-world software development.
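As background (this is the standard unbiased pass@k estimator from the HumanEval paper, not code from the REPOCOD release), a minimal sketch of how pass@1 is computed:

```python
# Unbiased pass@k estimator (Chen et al., 2021): given n samples per task,
# c of which pass all tests, pass@k = 1 - C(n-c, k) / C(n, k).
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    if n - c < k:
        return 1.0  # fewer failures than k samples: at least one must pass
    return 1.0 - comb(n - c, k) / comb(n, k)

# With a single sample per task, pass@1 reduces to the fraction of tasks solved.
print(pass_at_k(n=1, c=1, k=1))  # 1.0
```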

For easier evaluation, we sample the 200 hardest problems in REPOCOD to create REPOCOD-Lite, using the product of the prompt length and the canonical solution length (in lines) as an indicator of difficulty. From the three task categories (self-contained, file-level, and repo-level), we select 66, 67, and 67 samples respectively, in descending order of difficulty score.
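A rough sketch of that selection, assuming the dataset is loaded into a pandas DataFrame with the fields listed below. The category strings and line-count scoring are illustrative readings of the description above, not the released selection script:

```python
import pandas as pd

# assumed category labels for the context-complexity field
QUOTAS = {"self-contained": 66, "file-level": 67, "repo-level": 67}

def sample_lite(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    # difficulty score: prompt length x canonical-solution length, in lines
    prompt_lines = df["prompt"].str.count("\n") + 1
    solution_lines = df["full_function"].str.count("\n") + 1
    df["score"] = prompt_lines * solution_lines
    # take the highest-scoring samples per context-complexity category
    parts = [
        df[df["context-complexity"] == category].nlargest(k, "score")
        for category, k in QUOTAS.items()
    ]
    return pd.concat(parts).sort_values("score", ascending=False)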

  • For more details on data collection and evaluation results, please refer to our arXiv preprint.

  • Example code for downloading repositories, preparing repository snapshots, and running test cases for evaluation is provided at code.

  • Check our Leaderboard for preliminary results using SOTA LLMs with RAG.

Usage

```python
from datasets import load_dataset
data = load_dataset('lt-asset/REPOCOD_Lite')
print(data)

DatasetDict({
    test: Dataset({
        features: ['repository', 'repo_id', 'target_module_path', 'prompt', 'relavent_test_path', 'full_function', 'function_name', 'context-complexity'],
        num_rows: 200
    })
})
```
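To peek at an individual task (field access below assumes the schema shown above):

```python
# Look at one task: `prompt` holds the signature and docstring to complete,
# and `full_function` holds the developer-written canonical solution.
sample = data["test"][0]
print(sample["repository"], sample["function_name"])
print(sample["prompt"][:200])
```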

Data Fields

  • repository: the source repository of the current sample
  • repo_id: the unique index of the sample within its source repository
  • target_module_path: the path of the file containing the current sample, relative to the root of the source repository
  • prompt: the developer-provided function signature and docstring
  • relavent_test_path: the path to the relevant test cases
  • full_function: the canonical solution of the current sample
  • function_name: the name of the target function (current sample)
  • context-complexity: the context category of the sample (self-contained, file-level, or repo-level); see the sketch after this list for how the fields fit together
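As a rough illustration of how these fields combine during evaluation, here is a hypothetical sketch. REPOS_ROOT, the naive string splice, and reading relavent_test_path as a file of pytest node IDs (one per line) are all assumptions; see the released code linked above for the actual harness:

```python
# Hypothetical evaluation sketch: splice a model completion into the file at
# target_module_path inside a local snapshot of `repository`, then run the
# tests referenced by relavent_test_path.
import subprocess
from pathlib import Path

REPOS_ROOT = Path("repos")  # assumed: local repository snapshots, one per project

def run_sample(sample: dict, completion: str) -> bool:
    target = REPOS_ROOT / sample["repository"] / sample["target_module_path"]
    source = target.read_text()
    # naive splice: replace the canonical solution with the model's completion
    target.write_text(source.replace(sample["full_function"], completion))
    # assumed: the test file lists pytest node IDs, one per line
    tests = Path(sample["relavent_test_path"]).read_text().splitlines()
    result = subprocess.run(["pytest", *tests], capture_output=True)
    return result.returncode == 0
```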

Example

"repository": "seaborn",                          # collected from seaborn
"repo_id": "6",                                   # first sample from seaborn 
"target_module_path": "seaborn/_base.py",  # the target function is in this path
"prompt": "     def iter_data(
        self, grouping_vars=None, *,
        reverse=False, from_comp_data=False,
        by_facet=True, allow_empty=False, dropna=True,
    ):
        '''Generator for getting subsets of data defined by semantic variables.

        Also injects "col" and "row" into grouping semantics.

        Parameters
        ----------
        grouping_vars : string or list of strings
            Semantic variables that define the subsets of data.
        reverse : bool
            If True, reverse the order of iteration.
        from_comp_data : bool
            If True, use self.comp_data rather than self.plot_data
        by_facet : bool
            If True, add faceting variables to the set of grouping variables.
        allow_empty : bool
            If True, yield an empty dataframe when no observations exist for
            combinations of grouping variables.
        dropna : bool
            If True, remove rows with missing data.

        Yields
        ------
        sub_vars : dict
            Keys are semantic names, values are the level of that semantic.
        sub_data : :class:`pandas.DataFrame`
            Subset of ``plot_data`` for this combination of semantic values.

        '''",                            # the function signature and docstring for the target function
"relevant_test_path": "/usr/src/app/target_test_cases/failed_tests_Continuous.label.txt", # Path to relevant tests for the function
"full_function": "     def iter_data(
        self, grouping_vars=None, *,
        reverse=False, from_comp_data=False,
        by_facet=True, allow_empty=False, dropna=True,
    ):
        '''Generator for getting subsets of data defined by semantic variables.

        Also injects "col" and "row" into grouping semantics.

        Parameters
        ----------
        grouping_vars : string or list of strings
            Semantic variables that define the subsets of data.
        reverse : bool
            If True, reverse the order of iteration.
        from_comp_data : bool
            If True, use self.comp_data rather than self.plot_data
        by_facet : bool
            If True, add faceting variables to the set of grouping variables.
        allow_empty : bool
            If True, yield an empty dataframe when no observations exist for
            combinations of grouping variables.
        dropna : bool
            If True, remove rows with missing data.

        Yields
        ------
        sub_vars : dict
            Keys are semantic names, values are the level of that semantic.
        sub_data : :class:`pandas.DataFrame`
            Subset of ``plot_data`` for this combination of semantic values.

        '''
        if grouping_vars is None:
            grouping_vars = []
        ...",                            # the full snippet of the target function, including the function signature and docstring for the target function
"function_name": "VectorPlotter.iter_data"               # The name of the target function

Citation

```bibtex
@misc{liang2024repocod,
      title={Can Language Models Replace Programmers? REPOCOD Says 'Not Yet'},
      author={Shanchao Liang and Yiran Hu and Nan Jiang and Lin Tan},
      year={2024},
      eprint={2410.21647},
      archivePrefix={arXiv},
      primaryClass={cs.SE},
      url={https://arxiv.org/abs/2410.21647},
}
```