---
base_model: unsloth/llama-3.3-70b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---

# Llama-3.3-70B-o1 GGUF Quants

This repository contains the GGUF quants for the [Llama-3.3-70B-o1](https://huggingface.co/codelion/Llama-3.3-70B-o1) model. You can use them for local inference with tools such as Ollama or llama.cpp.
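
## Usage

As a minimal sketch of local inference, the example below loads one of the quant files through the llama-cpp-python bindings for llama.cpp. The quant filename and the generation parameters are assumptions for illustration; substitute the .gguf file you actually download from this repository.

```python
# Minimal sketch using llama-cpp-python; the filename below is an assumed
# example -- replace it with the quant you downloaded from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3.3-70B-o1-Q4_K_M.gguf",  # assumed quant filename
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Explain GGUF quantization in one sentence."}
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```

With Ollama, the same quant file can be registered via a local Modelfile and run from the command line; see the Ollama documentation for importing GGUF models.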