medmekk/Minitron-4B-Base.GGUF

GGUF-quantized versions of nvidia/Minitron-4B-Base.

Available Formats:

  • Q2_K: Minitron-4B-Base-Q2_K.gguf
  • Q3_K_S: Minitron-4B-Base-Q3_K_S.gguf
  • Q3_K_M: Minitron-4B-Base-Q3_K_M.gguf
  • Q3_K_L: Minitron-4B-Base-Q3_K_L.gguf
  • Q4_0: Minitron-4B-Base-Q4_0.gguf
  • Q4_K_S: Minitron-4B-Base-Q4_K_S.gguf
  • Q4_K_M: Minitron-4B-Base-Q4_K_M.gguf
  • Q5_0: Minitron-4B-Base-Q5_0.gguf
  • Q5_K_S: Minitron-4B-Base-Q5_K_S.gguf
  • Q5_K_M: Minitron-4B-Base-Q5_K_M.gguf
  • Q6_K: Minitron-4B-Base-Q6_K.gguf
  • Q8_0: Minitron-4B-Base-Q8_0.gguf
  • IQ3_M (imatrix): Minitron-4B-Base-IQ3_M_imat.gguf
  • IQ3_XXS (imatrix): Minitron-4B-Base-IQ3_XXS_imat.gguf
  • IQ4_NL (imatrix): Minitron-4B-Base-IQ4_NL_imat.gguf
  • Q4_K_M (imatrix): Minitron-4B-Base-Q4_K_M_imat.gguf
  • Q4_K_S (imatrix): Minitron-4B-Base-Q4_K_S_imat.gguf
  • IQ4_XS (imatrix): Minitron-4B-Base-IQ4_XS_imat.gguf
  • Q5_K_M (imatrix): Minitron-4B-Base-Q5_K_M_imat.gguf
  • Q5_K_S (imatrix): Minitron-4B-Base-Q5_K_S_imat.gguf
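
To keep a single quant on disk instead of streaming it at run time, one option is the huggingface_hub CLI. This is a minimal sketch; the Q4_K_M file is only an illustrative choice, and any filename from the list above works in its place.

# Download one quant file locally (example file; pick any from the list above):
pip install -U "huggingface_hub[cli]"
huggingface-cli download medmekk/Minitron-4B-Base.GGUF Minitron-4B-Base-Q4_K_M.gguf --local-dir .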

Usage with llama.cpp:

# CLI:
llama-cli --hf-repo medmekk/Minitron-4B-Base.GGUF --hf-file MODEL_FILE -p "Your prompt"

# Server:
llama-server --hf-repo medmekk/Minitron-4B-Base.GGUF --hf-file MODEL_FILE -c 2048
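
As a concrete illustration, replace MODEL_FILE with one of the filenames listed above; the quant, prompt, and context size here are assumptions, not recommendations.

# Example: run the Q4_K_M quant straight from the Hub (illustrative settings):
llama-cli --hf-repo medmekk/Minitron-4B-Base.GGUF --hf-file Minitron-4B-Base-Q4_K_M.gguf -p "Write a short poem about GPUs" -n 128

# Example: serve a locally downloaded copy on port 8080:
llama-server -m ./Minitron-4B-Base-Q4_K_M.gguf -c 2048 --port 8080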

Model details:

  • Format: GGUF
  • Model size: 4.19B params
  • Architecture: nemotron