TESTING...TESTING! The quantization used on this model may reduce quality, but it should be faster and may be usable with 4GB of VRAM.

hellork/BlenderLLM-IQ3_XXS-GGUF

This model was converted to GGUF format from FreedomIntelligence/BlenderLLM using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

brew install llama.cpp
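
To verify the install, print the build info (recent llama.cpp builds accept a --version flag; older ones may not):

llama-cli --version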

Alternatively, compile llama.cpp from source to take advantage of Nvidia CUDA hardware:

git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
# See the llama.cpp docs for other hardware builds or to make sure none of this has changed.

cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release # optional: add -j6 (use a number no greater than your core count)

# If your version of gcc is > 12 and nvcc reports errors, use conda to install gcc-12 and activate it,
# then run the above cmake commands again.
# Finally, run conda deactivate and re-run the build line once more to link the build outside of conda.
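
A minimal sketch of that gcc-12 workaround, assuming conda-forge's gcc/gxx packages (package names can vary by platform) and a hypothetical environment name of gcc12:

conda create -n gcc12 -c conda-forge gcc=12 gxx=12
conda activate gcc12
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release
conda deactivate
cmake --build build --config Release # re-link the build outside of conda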

# Add the -ngl 33 flag to the commands below to offload all of the model's layers to the GPU.
# If that uses too much VRAM and crashes, use a lower number.
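
For example, the CLI command from below with full GPU offload (lower the -ngl value if you run out of VRAM):

llama-cli --hf-repo hellork/BlenderLLM-IQ3_XXS-GGUF --hf-file blenderllm-iq3_xxs-imat.gguf -p "Build a Blender model of Starship" -ngl 33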

Invoke the llama.cpp server or the CLI.

CLI:

llama-cli --hf-repo hellork/BlenderLLM-IQ3_XXS-GGUF --hf-file blenderllm-iq3_xxs-imat.gguf -p "Build a Blender model of Starship"
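
BlenderLLM replies with a Blender Python script, so a typical workflow is to save the generated code to a file and run it headlessly through Blender (assuming blender is on your PATH; starship.py is a hypothetical filename for the saved script):

blender --background --python starship.py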

Server:

llama-server --hf-repo hellork/BlenderLLM-IQ3_XXS-GGUF --hf-file blenderllm-iq3_xxs-imat.gguf -c 2048
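
Once the server is up, you can query it over HTTP. A minimal sketch against llama-server's OpenAI-compatible chat endpoint, assuming the default address of http://127.0.0.1:8080:

curl http://127.0.0.1:8080/v1/chat/completions -H "Content-Type: application/json" -d '{"messages": [{"role": "user", "content": "Write a Blender script that models a simple chair"}]}'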

Note: You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

git clone https://github.com/ggerganov/llama.cpp

Step 2: Move into the llama.cpp folder and build it with the LLAMA_CURL=1 flag, along with any other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux).

cd llama.cpp && LLAMA_CURL=1 make
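
Note: newer llama.cpp releases deprecate the Makefile build in favor of CMake. If make fails for you, the rough CMake equivalent (CURL support is the LLAMA_CURL option) is:

cmake -B build -DLLAMA_CURL=ON # add -DGGML_CUDA=ON for Nvidia GPUs
cmake --build build --config Release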

Step 3: Run inference through the main binary.

./llama-cli --hf-repo hellork/BlenderLLM-IQ3_XXS-GGUF --hf-file blenderllm-iq3_xxs-imat.gguf -p "Write a Blender script to construct a Tie Fighter"

or

./llama-server --hf-repo hellork/BlenderLLM-IQ3_XXS-GGUF --hf-file blenderllm-iq3_xxs-imat.gguf -c 2048

Model details

Format: GGUF, 3-bit (IQ3_XXS) quantization
Model size: 7.62B params
Architecture: qwen2

Model tree for hellork/BlenderLLM-IQ3_XXS-GGUF

Base model: Qwen/Qwen2.5-7B
This model: quantized from FreedomIntelligence/BlenderLLM