Update README.md
README.md CHANGED
@@ -31,10 +31,32 @@ This model is based on [deepseek-coder-1.3b-base](https://huggingface.co/deepsee

The performance of the OpenCodeInterpreter-DS-1.3B is highlighted below, showcasing the improvements when execution feedback is incorporated. Scores are presented for two benchmarks: HumanEval and MBPP, with an average increase indicated to demonstrate the overall enhancement in performance.

-| **Benchmark**
-|
-| **OpenCodeInterpreter-DS-1.3B** |
-|
+| **Benchmark** | **HumanEval (+)** | **MBPP (+)** | **Average (+)** |
+|---------------|-------------------|--------------|-----------------|
+| **OpenCodeInterpreter-DS-1.3B** | 65.2 (61.0) | 63.4 (52.4) | 64.3 (56.7) |
+| + Execution Feedback | 65.2 (62.2) | 65.2 (55.6) | 65.2 (58.9) |
+| **OpenCodeInterpreter-DS-6.7B** | 76.2 (72.0) | 73.9 (63.7) | 75.1 (67.9) |
+| + Execution Feedback | 81.1 (78.7) | 82.7 (72.4) | 81.9 (75.6) |
+| + Synth. Human Feedback | 87.2 (86.6) | 86.2 (74.2) | 86.7 (80.4) |
+| + Synth. Human Feedback (Oracle) | 89.7 (86.6) | 87.2 (75.2) | 88.5 (80.9) |
+| **OpenCodeInterpreter-DS-33B** | 79.3 (74.3) | 78.7 (66.4) | 79.0 (70.4) |
+| + Execution Feedback | 82.9 (80.5) | 83.5 (72.2) | 83.2 (76.4) |
+| + Synth. Human Feedback | 88.4 (86.0) | 87.5 (75.9) | 88.0 (81.0) |
+| + Synth. Human Feedback (Oracle) | 92.7 (89.7) | 90.5 (79.5) | 91.6 (84.6) |
+| **OpenCodeInterpreter-CL-7B** | 72.6 (67.7) | 66.4 (55.4) | 69.5 (61.6) |
+| + Execution Feedback | 75.6 (70.1) | 69.9 (60.7) | 72.8 (65.4) |
+| **OpenCodeInterpreter-CL-13B** | 77.4 (73.8) | 70.7 (59.2) | 74.1 (66.5) |
+| + Execution Feedback | 81.1 (76.8) | 78.2 (67.2) | 79.7 (72.0) |
+| **OpenCodeInterpreter-CL-34B** | 78.0 (72.6) | 73.4 (61.4) | 75.7 (67.0) |
+| + Execution Feedback | 81.7 (78.7) | 80.2 (67.9) | 81.0 (73.3) |
+| **OpenCodeInterpreter-CL-70B** | 76.2 (70.7) | 73.0 (61.9) | 74.6 (66.3) |
+| + Execution Feedback | 79.9 (77.4) | 81.5 (69.9) | 80.7 (73.7) |
+| **OpenCodeInterpreter-GM-7B** | 56.1 (50.0) | 39.8 (34.6) | 48.0 (42.3) |
+| + Execution Feedback | 64.0 (54.3) | 48.6 (40.9) | 56.3 (47.6) |
+| **OpenCodeInterpreter-STAR-3B** | 65.2 (57.9) | 62.7 (52.9) | 64.0 (55.4) |
+| + Execution Feedback | 67.1 (60.4) | 63.4 (54.9) | 65.3 (57.7) |
+| **OpenCodeInterpreter-STAR-7B** | 73.8 (68.9) | 61.7 (51.1) | 67.8 (60.0) |
+| + Execution Feedback | 75.6 (69.5) | 66.9 (55.4) | 71.3 (62.5) |

*Note: The "(+)" notation represents scores from extended versions of the HumanEval and MBPP benchmarks. To ensure a fair comparison, the results shown for adding execution feedback are based on outcomes after just one iteration of feedback, without unrestricted iterations. This approach highlights the immediate impact of execution feedback on performance improvements across benchmarks.*
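For readers unfamiliar with the protocol described in the note, the sketch below illustrates what a single round of execution feedback looks like in practice: the model proposes code, the code is run once, and the execution result is passed back for exactly one revision. Only one such round is applied in the table above, so the gains reflect the immediate value of seeing one execution result rather than open-ended iteration. This is a minimal, hypothetical illustration, not the authors' evaluation harness; the Hub id `m-a-p/OpenCodeInterpreter-DS-1.3B`, the presence of a chat template, and the plain-subprocess executor are all assumptions.

```python
# Minimal sketch of one execution-feedback round (illustrative only).
# Assumptions: the checkpoint is published as "m-a-p/OpenCodeInterpreter-DS-1.3B"
# and ships a chat template; the executor here is a plain subprocess, not the
# sandbox used for the benchmark numbers above.
import re
import subprocess
import tempfile

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "m-a-p/OpenCodeInterpreter-DS-1.3B"  # assumed Hub id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)


def chat(messages):
    """Generate one assistant reply for a list of chat messages."""
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=512, do_sample=False)
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)


def extract_code(reply: str) -> str:
    """Pull the first fenced code block out of a markdown reply, if present."""
    match = re.search(r"```(?:python)?\n(.*?)```", reply, re.DOTALL)
    return match.group(1) if match else reply


def run_snippet(code: str) -> str:
    """Execute generated Python in a subprocess and return stdout or the error text."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run(
        ["python", path], capture_output=True, text=True, timeout=30
    )
    return result.stdout if result.returncode == 0 else result.stderr


task = "Write a Python function fib(n) that returns the n-th Fibonacci number, then print fib(10)."
messages = [{"role": "user", "content": task}]
first_reply = chat(messages)

# One iteration of execution feedback: run the code once and report the result back.
execution_result = run_snippet(extract_code(first_reply))
messages += [
    {"role": "assistant", "content": first_reply},
    {"role": "user", "content": f"Execution output:\n{execution_result}\nPlease fix any issues."},
]
revised_reply = chat(messages)
print(revised_reply)
```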