GGUF quants of nvidia/AceMath-72B-Instruct

Paper link on arXiv

Using llama.cpp b4682 (commit 0893e0114e934bdd0eba0ff69d9ef8c59343cbc3)

The importance matrix was generated from InferenceIllusionist's groups_merged-enhancedV3.txt (later renamed calibration_datav3.txt), an edited version of kalomaze's original groups_merged.txt.

All quants were generated/calibrated with the imatrix, including the K quants.
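For reference, imatrix-calibrated quants like these are typically produced with llama.cpp's `llama-imatrix` and `llama-quantize` tools. The commands below are a minimal sketch, assuming a full-precision GGUF conversion of the model already exists locally; the file names (`AceMath-72B-Instruct-F16.gguf`, `imatrix.dat`) and the choice of Q4_K_M as an example target are assumptions, not a record of the exact invocation used here.

```shell
# Sketch only: file names and quant type are illustrative assumptions.

# 1. Compute the importance matrix over the calibration dataset.
./llama-imatrix \
    -m AceMath-72B-Instruct-F16.gguf \
    -f calibration_datav3.txt \
    -o imatrix.dat

# 2. Quantize using the imatrix (applies to K quants as well).
./llama-quantize \
    --imatrix imatrix.dat \
    AceMath-72B-Instruct-F16.gguf \
    AceMath-72B-Instruct-Q4_K_M.gguf \
    Q4_K_M
```

The same `--imatrix` flag is passed for every quant type produced, which is what "all quants were generated/calibrated with the imatrix" refers to.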

Model size: 72.7B params
Architecture: qwen2

Quantization levels available: 1-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit


Model tree for redponike/AceMath-72B-Instruct-GGUF

Base model: Qwen/Qwen2.5-72B (this model is one of 7 quantized versions)