
About this model

This is phi-4 with the bugs fixed by Unsloth, converted to GGUF using an importance matrix (imatrix) that contains a large amount of Japanese text.

Although phi-4 has strong base performance, it does not always follow prompt instructions, so prompt engineering is especially important.
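Because instruction-following can be inconsistent, it helps to give phi-4 a fully explicit prompt in its chat format. The sketch below assembles a single-turn prompt; the `<|im_start|>`/`<|im_sep|>`/`<|im_end|>` tokens are assumptions based on the published phi-4 chat template, so verify them against the tokenizer config of the file you actually download.

```python
# Sketch of an explicit single-turn prompt for phi-4.
# NOTE: the chat-template tokens below are assumptions; check the repo's
# tokenizer configuration before relying on them.

def build_phi4_prompt(system: str, user: str) -> str:
    """Assemble a phi-4 style chat prompt with an explicit system instruction."""
    return (
        f"<|im_start|>system<|im_sep|>{system}<|im_end|>"
        f"<|im_start|>user<|im_sep|>{user}<|im_end|>"
        f"<|im_start|>assistant<|im_sep|>"
    )

prompt = build_phi4_prompt(
    "You are a helpful assistant. Always answer in Japanese.",
    "Summarize the following text in three bullet points: ...",
)
print(prompt)
```

Spelling out the output language and format in the system message, as above, tends to reduce the cases where the model drifts from the instructions.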

How to use

Various samples are available via the "Use this model" button at the top right of this page.

(1) llama.cpp simple CLI client/server style

Load and run the model:

llama-cli \
  --hf-repo "dahara1/unsloth-phi-4-gguf-japanese-imatrix" \
  --hf-file phi-15B-IQ3_M.gguf \
  -p "You are a helpful assistant" \
  --conversation
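Besides llama-cli, llama.cpp also ships llama-server, an HTTP server with an OpenAI-compatible API. The sketch below only builds the JSON request body for its chat endpoint; the default address (127.0.0.1:8080) and the /v1/chat/completions path are assumptions taken from llama.cpp's defaults, so check `llama-server --help` for your build.

```python
import json

# Sketch of an OpenAI-style request body for llama.cpp's llama-server,
# started e.g. with:
#   llama-server --hf-repo dahara1/unsloth-phi-4-gguf-japanese-imatrix \
#                --hf-file phi-15B-IQ3_M.gguf
# The default port (8080) and endpoint path are assumptions; verify locally.

def chat_payload(system: str, user: str, temperature: float = 0.2) -> str:
    """Build the JSON body for POST http://127.0.0.1:8080/v1/chat/completions."""
    return json.dumps({
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
        "temperature": temperature,
    })

print(chat_payload("You are a helpful assistant", "こんにちは"))
```

Sending this body with any HTTP client returns a standard chat-completion response, which makes the model easy to plug into existing OpenAI-client code.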

(2) Ollama. Official page: https://ollama.com/

https://huggingface.co/docs/hub/ollama

Example: ollama run hf.co/dahara1/unsloth-phi-4-gguf-japanese-imatrix:IQ3_XXS

Format: GGUF
Model size: 14.7B params
Architecture: llama

Available quantizations: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit
