---
base_model:
- Qwen/Qwen2.5-Math-72B
---

GGUF quants of [nvidia/AceMath-72B-Instruct](https://huggingface.co/nvidia/AceMath-72B-Instruct)

[Paper link on arXiv](https://arxiv.org/abs/2412.15084)

Quantized with llama.cpp b4682 (commit 0893e0114e934bdd0eba0ff69d9ef8c59343cbc3).

The importance matrix was generated with InferenceIllusionist's [groups_merged-enhancedV3.txt](https://github.com/ggerganov/llama.cpp/files/15440637/groups_merged-enhancedV3.txt) (later renamed calibration_datav3.txt), an edited version of kalomaze's original groups_merged.txt.

All quants, including the K quants, were generated and calibrated with the imatrix.
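
As a usage example, here is a minimal sketch of running one of these quants with the llama-cpp-python bindings. The file name, context size, and GPU offload setting below are assumptions; substitute whichever quant file you downloaded and adjust to your hardware.

```python
# Minimal sketch (assumptions noted): load a local GGUF quant of
# AceMath-72B-Instruct with llama-cpp-python and ask a math question.
from llama_cpp import Llama

llm = Llama(
    model_path="AceMath-72B-Instruct-Q4_K_M.gguf",  # assumed local file name
    n_ctx=4096,        # context window; adjust as needed
    n_gpu_layers=0,    # raise to offload layers to GPU if VRAM allows
)

out = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Prove that the sum of two even integers is even."}
    ],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```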