Evaluation Result

#15 opened by tanliboy

I evaluated the model, and while the inference speed is impressive, I was unable to reproduce the performance reported in the paper. Here are the results I obtained:

| Groups | Version | Filter | n-shot | Metric | Value | Stderr |
|---|---|---|---|---|---|---|
| mmlu | 1 | none | | acc | 0.5486 | ± 0.0040 |
| - humanities | 1 | none | | acc | 0.4997 | ± 0.0069 |
| - other | 1 | none | | acc | 0.6308 | ± 0.0083 |
| - social sciences | 1 | none | | acc | 0.6406 | ± 0.0085 |
| - stem | 1 | none | | acc | 0.4510 | ± 0.0086 |

| Tasks | Version | Filter | n-shot | Metric | Value | Stderr |
|---|---|---|---|---|---|---|
| hellaswag | 1 | none | 0 | acc | 0.6188 | ± 0.0048 |
| | | none | 0 | acc_norm | 0.8026 | ± 0.0040 |
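The table layout above matches the output of EleutherAI's lm-evaluation-harness, so a run along the following lines should reproduce numbers in this format. This is a minimal sketch, not the exact command I used: the checkpoint ID is an assumption, and the few-shot setting in particular may need to be changed to match the paper's protocol.

```python
# Minimal sketch of an evaluation with EleutherAI's lm-evaluation-harness
# (pip install lm-eval). The checkpoint ID below is an assumption;
# substitute the model you are evaluating.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=google/recurrentgemma-2b",  # assumed checkpoint
    tasks=["mmlu", "hellaswag"],
    num_fewshot=0,  # papers often report MMLU few-shot; this strongly affects scores
)

# Print per-task metrics, matching the Value/Stderr columns above.
for task, metrics in results["results"].items():
    print(task, metrics)
```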

The discrepancy may stem from differences in evaluation settings. Overall, though, the model's performance seems dated compared to the latest models. Are there any plans to release an updated version based on the Griffin architecture?

Google org

Hi @tanliboy, regarding your question about an updated version of the Griffin architecture: I currently don't have any information about upcoming releases or updates to this specific architecture.
In the meantime, please consider fine-tuning Griffin on the specific tasks you are working on; fine-tuning can yield significant improvements over the base model.
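For illustration, here is a minimal causal-LM fine-tuning sketch using the Hugging Face transformers Trainer. The checkpoint ID and dataset are placeholder assumptions; substitute text data from your actual target task.

```python
# Minimal causal-LM fine-tuning sketch with Hugging Face transformers.
# The checkpoint and dataset are placeholder assumptions for illustration.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "google/recurrentgemma-2b"  # assumed Griffin-based checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Placeholder dataset; replace with text from your target task.
dataset = load_dataset("imdb", split="train[:1%]")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True,
                        remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="griffin-finetuned",
        per_device_train_batch_size=1,
        num_train_epochs=1,
    ),
    train_dataset=tokenized,
    # mlm=False produces causal-LM labels (inputs shifted by one token).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```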

Thank you.
