TIGER-Lab/AceCodeRM-7B
- Note: The state-of-the-art 7B reward model for code generation
- Note: The state-of-the-art 32B reward model for code generation
- Note: The first large-scale coding dataset with an average of 16 test cases per prompt, synthesized by GPT-4o-mini
- Note: A DeepSeek-R1-style RL-tuned model with binary pass rate as the verifiable reward