Running the model with vLLM does not actually work
This blog post: https://unsloth.ai/blog/deepseekr1-dynamic
Claims that the 1.58bpw model can be run with vLLM. However, this is not the case. Attempting to run the model with the following command, following the vLLM documentation:
vllm serve /raid/models/DeepSeek-R1-UD-IQ1_S.gguf --tokenizer deepseek-ai/DeepSeek-R1
Produces the following error message:
ValueError: GGUF model with architecture deepseek2 is not supported yet.
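For what it's worth, that architecture string is read straight out of the GGUF metadata itself. A minimal sketch to confirm what transformers is seeing, assuming the gguf pip package and the same path as above:

# Print the architecture string stored in the GGUF header
# (sketch; assumes `pip install gguf` and the path from the vllm serve command above).
from gguf import GGUFReader

reader = GGUFReader("/raid/models/DeepSeek-R1-UD-IQ1_S.gguf")
arch_field = reader.fields["general.architecture"]
# For a GGUF string field, the raw bytes live in the last part.
print(bytes(arch_field.parts[-1]).decode("utf-8"))  # prints: deepseek2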
Whoops, OK, we'll remove it.
By the way, even when you merged it using llama.cpp, it didn't work?
I have not tried running it with llama.cpp. I was hoping to use vLLM, as it seemed to be the only library capable of running a GGUF model with tensor parallelism.
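Concretely, this is roughly what I was hoping to be able to do. A sketch only, since it currently fails with the same architecture error at config-loading time:

# Sketch of the intended setup (not working today): vLLM's offline API with
# the merged single-file GGUF, sharded across several GPUs via tensor parallelism.
from vllm import LLM, SamplingParams

llm = LLM(
    model="/raid/models/DeepSeek-R1-UD-IQ1_S.gguf",  # merged GGUF file
    tokenizer="deepseek-ai/DeepSeek-R1",             # HF tokenizer, per the vLLM GGUF docs
    tensor_parallel_size=4,                          # example value; match your GPU count
    enforce_eager=True,
)
print(llm.generate(["Hello"], SamplingParams(max_tokens=16)))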
I am also trying to get this working on my end. I can reproduce this issue running the standard vllm/vllm-openai:latest Docker image (see the docker-compose.yaml file copied below).
I've been trying to get a docker compose setup for easy local inference with the 1.58bpw DeepSeek-R1-UD-IQ1_S GGUF model. I have 128G quad-channel DDR4 and 4x3090, for a total of 224G RAM+VRAM, running under Ubuntu, so I also thought vLLM seemed like the ideal way to host DeepSeek R1 locally after reading the blog post. But so far, no luck. Any help getting to the bottom of this would be greatly appreciated.
I started by combining the GGUF files per the docs:
$ ./llama-gguf-split --merge /models/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf /models/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S.gguf
gguf_merge: /models/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf -> /models/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S.gguf
gguf_merge: reading metadata /models/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf done
gguf_merge: reading metadata /models/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00002-of-00003.gguf done
gguf_merge: reading metadata /models/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00003-of-00003.gguf done
gguf_merge: writing tensors /models/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf done
gguf_merge: writing tensors /models/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00002-of-00003.gguf done
gguf_merge: writing tensors /models/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00003-of-00003.gguf done
gguf_merge: /models/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S.gguf merged from 3 split with 1025 tensors.
After that, I run the docker-compose.yaml file below (just put it in an empty directory) with docker compose up.
docker-compose.yaml
version: "3.8"
services:
  vllm:
    image: vllm/vllm-openai:latest
    # Use host networking so the service can be accessed via the host’s network.
    network_mode: host
    # Use host IPC (helps with PyTorch shared memory usage).
    ipc: host
    # If your Docker environment supports GPU device reservations in compose:
    deploy:
      resources:
        reservations:
          devices:
            - driver: "nvidia"
              count: "all"
              capabilities: ["gpu"]
    # Mount your GGUF file from the host machine into the container.
    # Adjust the path on the host side as needed (e.g., ./models/).
    volumes:
      - /models/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S.gguf:/models/
    # The 'command' section passes arguments to vLLM's OpenAI-compatible API server
    # (the image's default entrypoint).
    # - Point --model to the single-file GGUF model in your mounted directory.
    # - Optionally specify --tensor-parallel-size <N> if you want multiple GPUs.
    command: >
      --model "/models"
      --port 5000
      --tensor-parallel-size 4
      --max-model-len 32768
      --enforce-eager
Output:
vllm-1 | Traceback (most recent call last):
vllm-1 | File "/usr/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
vllm-1 | self.run()
vllm-1 | File "/usr/lib/python3.12/multiprocessing/process.py", line 108, in run
vllm-1 | self._target(*self._args, **self._kwargs)
vllm-1 | File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 389, in run_mp_engine
vllm-1 | raise e
vllm-1 | File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 378, in run_mp_engine
vllm-1 | engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
vllm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-1 | File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 116, in from_engine_args
vllm-1 | engine_config = engine_args.create_engine_config(usage_context)
vllm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-1 | File "/usr/local/lib/python3.12/dist-packages/vllm/engine/arg_utils.py", line 1047, in create_engine_config
vllm-1 | model_config = self.create_model_config()
vllm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-1 | File "/usr/local/lib/python3.12/dist-packages/vllm/engine/arg_utils.py", line 972, in create_model_config
vllm-1 | return ModelConfig(
vllm-1 | ^^^^^^^^^^^^
vllm-1 | File "/usr/local/lib/python3.12/dist-packages/vllm/config.py", line 282, in __init__
vllm-1 | hf_config = get_config(self.model, trust_remote_code, revision,
vllm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-1 | File "/usr/local/lib/python3.12/dist-packages/vllm/transformers_utils/config.py", line 201, in get_config
vllm-1 | config_dict, _ = PretrainedConfig.get_config_dict(
vllm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-1 | File "/usr/local/lib/python3.12/dist-packages/transformers/configuration_utils.py", line 591, in get_config_dict
vllm-1 | config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
vllm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-1 | File "/usr/local/lib/python3.12/dist-packages/transformers/configuration_utils.py", line 682, in _get_config_dict
vllm-1 | config_dict = load_gguf_checkpoint(resolved_config_file, return_tensors=False)["config"]
vllm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-1 | File "/usr/local/lib/python3.12/dist-packages/transformers/modeling_gguf_pytorch_utils.py", line 387, in load_gguf_checkpoint
vllm-1 | raise ValueError(f"GGUF model with architecture {architecture} is not supported yet.")
vllm-1 | ValueError: GGUF model with architecture deepseek2 is not supported yet.
vllm-1 exited with code 1
Hopefully this is reproducible enough for folks to see whether anyone else hits the same issues. If anyone has a fix, or can verify whether this dynamic-quant R1 model is supposed to work with vLLM, I'd greatly appreciate it.
Looks like this is also being discussed on the vLLM GitHub repo here.
Oh amazing, this is really interesting.
FWIW, it appears I get the same error on the SGLang side when trying to run the same DeepSeek-R1-UD-IQ1_S GGUF model. It seems the underlying issue is the lack of DeepSeek GGUF compatibility in the transformers package.
sglang | [2025-02-03 10:32:36] server_args=ServerArgs(model_path='/models', tokenizer_path='/models', tokenizer_mode='auto', load_format='gguf', trust_remote_code=False, dtype='auto', kv_cache_dtype='auto', quantization_param_path=None, quantization='gguf', context_length=None, device='cuda', served_model_name='/models', chat_template=None, is_embedding=False, revision=None, skip_tokenizer_init=False, host='0.0.0.0', port=5000, mem_fraction_static=0.88, max_running_requests=None, max_total_tokens=None, chunked_prefill_size=2048, max_prefill_tokens=16384, schedule_policy='lpm', schedule_conservativeness=1.0, cpu_offload_gb=0, prefill_only_one_req=False, tp_size=1, stream_interval=1, stream_output=False, random_seed=798833006, constrained_json_whitespace_pattern=None, watchdog_timeout=300, download_dir=None, base_gpu_id=0, log_level='info', log_level_http=None, log_requests=False, show_time_cost=False, enable_metrics=False, decode_log_interval=40, api_key=None, file_storage_pth='sglang_storage', enable_cache_report=False, dp_size=1, load_balance_method='round_robin', ep_size=1, dist_init_addr=None, nnodes=1, node_rank=0, json_model_override_args='{}', lora_paths=None, max_loras_per_batch=8, attention_backend='flashinfer', sampling_backend='flashinfer', grammar_backend='outlines', speculative_draft_model_path=None, speculative_algorithm=None, speculative_num_steps=5, speculative_num_draft_tokens=64, speculative_eagle_topk=8, enable_double_sparsity=False, ds_channel_config_path=None, ds_heavy_channel_num=32, ds_heavy_token_num=256, ds_heavy_channel_type='qk', ds_sparse_decode_threshold=4096, disable_radix_cache=False, disable_jump_forward=False, disable_cuda_graph=False, disable_cuda_graph_padding=False, disable_outlines_disk_cache=False, disable_custom_all_reduce=False, disable_mla=False, disable_overlap_schedule=False, enable_mixed_chunk=False, enable_dp_attention=False, enable_ep_moe=False, enable_torch_compile=False, torch_compile_max_bs=32, cuda_graph_max_bs=8, cuda_graph_bs=None, torchao_config='', enable_nan_detection=False, enable_p2p_check=False, triton_attention_reduce_in_fp32=False, triton_attention_num_kv_splits=8, num_continuous_decode_steps=1, delete_ckpt_after_loading=False, enable_memory_saver=False, allow_auto_truncate=False, enable_custom_logit_processor=False, tool_call_parser=None, enable_hierarchical_cache=False)
sglang | Traceback (most recent call last):
sglang | File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
sglang | return _run_code(code, main_globals, None,
sglang | File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
sglang | exec(code, run_globals)
sglang | File "/sgl-workspace/sglang/python/sglang/launch_server.py", line 14, in <module>
sglang | launch_server(server_args)
sglang | File "/sgl-workspace/sglang/python/sglang/srt/entrypoints/http_server.py", line 491, in launch_server
sglang | tokenizer_manager, scheduler_info = _launch_subprocesses(server_args=server_args)
sglang | File "/sgl-workspace/sglang/python/sglang/srt/entrypoints/engine.py", line 426, in _launch_subprocesses
sglang | tokenizer_manager = TokenizerManager(server_args, port_args)
sglang | File "/sgl-workspace/sglang/python/sglang/srt/managers/tokenizer_manager.py", line 134, in __init__
sglang | self.model_config = ModelConfig(
sglang | File "/sgl-workspace/sglang/python/sglang/srt/configs/model_config.py", line 53, in __init__
sglang | self.hf_config = get_config(
sglang | File "/sgl-workspace/sglang/python/sglang/srt/hf_transformers_utils.py", line 67, in get_config
sglang | config = AutoConfig.from_pretrained(
sglang | File "/usr/local/lib/python3.10/dist-packages/transformers/models/auto/configuration_auto.py", line 1054, in from_pretrained
sglang | config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
sglang | File "/usr/local/lib/python3.10/dist-packages/transformers/configuration_utils.py", line 591, in get_config_dict
sglang | config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
sglang | File "/usr/local/lib/python3.10/dist-packages/transformers/configuration_utils.py", line 682, in _get_config_dict
sglang | config_dict = load_gguf_checkpoint(resolved_config_file, return_tensors=False)["config"]
sglang | File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_gguf_pytorch_utils.py", line 387, in load_gguf_checkpoint
sglang | raise ValueError(f"GGUF model with architecture {architecture} is not supported yet.")
sglang | ValueError: GGUF model with architecture deepseek2 is not supported yet.
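To take the serving stacks out of the picture entirely, a transformers-only repro along these lines should hit the same ValueError (a sketch; paths are assumed to match my volume mounts, and it relies on the gguf_file loading path in recent transformers):

# Sketch isolating the failure to transformers' GGUF config loader,
# independent of vLLM/SGLang (paths assumed).
from transformers import AutoConfig

# Expected failure today:
#   ValueError: GGUF model with architecture deepseek2 is not supported yet.
config = AutoConfig.from_pretrained(
    "/models",                               # directory containing the merged GGUF
    gguf_file="DeepSeek-R1-UD-IQ1_S.gguf",
)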
Here's my docker compose YAML file as well; it's essentially the one provided in the SGLang repo, though I've modified it a bit.
compose.yaml
services:
  sglang:
    image: lmsysorg/sglang:v0.4.2.post1-cu124
    container_name: sglang
    volumes:
      - /models/DeepSeek-R1-UD-IQ1_S.gguf:/models/
      - ${HOME}/.cache/huggingface:/root/.cache/huggingface
      # If you use modelscope, you need to mount this directory
      # - ${HOME}/.cache/modelscope:/root/.cache/modelscope
    restart: unless-stopped
    network_mode: host
    # Or you can publish only port 5000
    ports:
      - 5000:5000
    environment:
      HF_TOKEN: <secret>
      # If you use modelscope to download the model, you need to set this environment variable
      # - SGLANG_USE_MODELSCOPE: true
    entrypoint: python3 -m sglang.launch_server
    command:
      --model-path /models
      --host 0.0.0.0
      --port 5000
    ulimits:
      memlock: -1
      stack: 67108864
    ipc: host
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:5000/health || exit 1"]
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
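For completeness, this is the smoke test I plan to run once a server actually comes up; it just pokes the same health endpoint the compose healthcheck uses (host and port assumed from the settings above):

# Minimal health probe using only the standard library.
from urllib.request import urlopen

with urlopen("http://localhost:5000/health", timeout=5) as resp:
    print(resp.status)  # expect 200 once the server is healthy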
For anyone else hitting these issues who doesn't want to deal with the packaging craziness of Ollama, doesn't want to manually compile llama.cpp, and wants something that will run in a Docker container, Text generation web UI appears to be the best option for local inference on DeepSeek GGUF model weights. They updated their llama-cpp-python package dependency a few days ago, which means they now include the latest llama.cpp support for DeepSeek GGUF model weights. It's not something I would use in production (it lacks parallel request processing, among other things), but it's fine for playing around with these models on homelab hardware until Hugging Face updates the transformers package with DeepSeek GGUF support.
I'm really looking forward to getting DeepSeek GGUF model weights support in transformers. Sounds like it's one of the main blockers to running production-grade hosting of DeepSeek models on local homelab hardware at the moment. If any Hugging Face devs are reading this, please prioritize transformers support for the DeepSeek GGUF architecture if possible.