medmekk/Minitron-4B-Base.GGUF

GGUF quantized versions of nvidia/Minitron-4B-Base

Available Formats:

  • Q2_K: Minitron-4B-Base-Q2_K.gguf
  • Q3_K_S: Minitron-4B-Base-Q3_K_S.gguf
  • Q3_K_M: Minitron-4B-Base-Q3_K_M.gguf
  • Q3_K_L: Minitron-4B-Base-Q3_K_L.gguf
  • Q4_0: Minitron-4B-Base-Q4_0.gguf
  • Q4_K_S: Minitron-4B-Base-Q4_K_S.gguf
  • Q4_K_M: Minitron-4B-Base-Q4_K_M.gguf
  • Q5_0: Minitron-4B-Base-Q5_0.gguf
  • Q5_K_S: Minitron-4B-Base-Q5_K_S.gguf
  • Q5_K_M: Minitron-4B-Base-Q5_K_M.gguf
  • Q6_K: Minitron-4B-Base-Q6_K.gguf
  • Q8_0: Minitron-4B-Base-Q8_0.gguf
  • IQ3_M (imatrix): Minitron-4B-Base-IQ3_M_imat.gguf
  • IQ3_XXS (imatrix): Minitron-4B-Base-IQ3_XXS_imat.gguf
  • IQ4_NL (imatrix): Minitron-4B-Base-IQ4_NL_imat.gguf
  • Q4_K_M (imatrix): Minitron-4B-Base-Q4_K_M_imat.gguf
  • Q4_K_S (imatrix): Minitron-4B-Base-Q4_K_S_imat.gguf
  • IQ4_XS (imatrix): Minitron-4B-Base-IQ4_XS_imat.gguf
  • Q5_K_M (imatrix): Minitron-4B-Base-Q5_K_M_imat.gguf
  • Q5_K_S (imatrix): Minitron-4B-Base-Q5_K_S_imat.gguf

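To fetch a single quant ahead of time instead of streaming it through llama.cpp, the huggingface-cli download command is one option (this assumes the huggingface_hub package is installed; the Q4_K_M file is used here purely as an example):

# Download one quant file from the Hub into the current directory (example file only)
huggingface-cli download medmekk/Minitron-4B-Base.GGUF Minitron-4B-Base-Q4_K_M.gguf --local-dir .
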
Usage with llama.cpp (replace MODEL_FILE with one of the filenames listed above):

# CLI:
llama-cli --hf-repo medmekk/Minitron-4B-Base.GGUF --hf-file MODEL_FILE -p "Your prompt"
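
# For example, a minimal run with the Q4_K_M quant listed above (the prompt is arbitrary):
llama-cli --hf-repo medmekk/Minitron-4B-Base.GGUF --hf-file Minitron-4B-Base-Q4_K_M.gguf -p "Once upon a time"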

# Server:
llama-server --hf-repo medmekk/Minitron-4B-Base.GGUF --hf-file MODEL_FILE -c 2048
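
Once the server is running (a sketch assuming the Q4_K_M file and the default port 8080), a plain completion can be requested from its /completion endpoint; the payload below is only illustrative:

# Example: start the server with a concrete file, then query it with curl
llama-server --hf-repo medmekk/Minitron-4B-Base.GGUF --hf-file Minitron-4B-Base-Q4_K_M.gguf -c 2048
curl http://localhost:8080/completion -d '{"prompt": "Once upon a time", "n_predict": 64}'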