requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/llava-hf/LLaVA-NeXT-Video-7B-hf/resolve/main/chat_template.jinja

#14
opened by BavariaForest

Dear authors, thank you for your contribution. While using the model for inference, I encountered the following issue, which may be due to some missing files on Hugging Face. Could you please look into this and resolve the problem? Thank you!

Traceback (most recent call last):
  File "/home/limiaoyu/anaconda3/envs/llava_next/lib/python3.9/site-packages/huggingface_hub/utils/_http.py", line 406, in hf_raise_for_status
    response.raise_for_status()
  File "/home/limiaoyu/anaconda3/envs/llava_next/lib/python3.9/site-packages/requests/models.py", line 1024, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/llava-hf/LLaVA-NeXT-Video-7B-hf/resolve/main/chat_template.jinja

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/limiaoyu/anaconda3/envs/llava_next/lib/python3.9/site-packages/transformers/utils/hub.py", line 403, in cached_file
    resolved_file = hf_hub_download(
  File "/home/limiaoyu/anaconda3/envs/llava_next/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/home/limiaoyu/anaconda3/envs/llava_next/lib/python3.9/site-packages/huggingface_hub/file_download.py", line 860, in hf_hub_download
    return _hf_hub_download_to_cache_dir(
  File "/home/limiaoyu/anaconda3/envs/llava_next/lib/python3.9/site-packages/huggingface_hub/file_download.py", line 967, in _hf_hub_download_to_cache_dir
    _raise_on_head_call_error(head_call_error, force_download, local_files_only)
  File "/home/limiaoyu/anaconda3/envs/llava_next/lib/python3.9/site-packages/huggingface_hub/file_download.py", line 1482, in _raise_on_head_call_error
    raise head_call_error
  File "/home/limiaoyu/anaconda3/envs/llava_next/lib/python3.9/site-packages/huggingface_hub/file_download.py", line 1374, in _get_metadata_or_catch_error
    metadata = get_hf_file_metadata(
  File "/home/limiaoyu/anaconda3/envs/llava_next/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/home/limiaoyu/anaconda3/envs/llava_next/lib/python3.9/site-packages/huggingface_hub/file_download.py", line 1294, in get_hf_file_metadata
    r = _request_wrapper(
  File "/home/limiaoyu/anaconda3/envs/llava_next/lib/python3.9/site-packages/huggingface_hub/file_download.py", line 278, in _request_wrapper
    response = _request_wrapper(
  File "/home/limiaoyu/anaconda3/envs/llava_next/lib/python3.9/site-packages/huggingface_hub/file_download.py", line 302, in _request_wrapper
    hf_raise_for_status(response)
  File "/home/limiaoyu/anaconda3/envs/llava_next/lib/python3.9/site-packages/huggingface_hub/utils/_http.py", line 454, in hf_raise_for_status
    raise _format(RepositoryNotFoundError, message, response) from e
huggingface_hub.errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-679ba566-21bed48d7e7f4b9409c8330d;e65d1ca9-d03b-4976-b4f3-d687efd7f5d1)

Repository Not Found for url: https://huggingface.co/llava-hf/LLaVA-NeXT-Video-7B-hf/resolve/main/chat_template.jinja.
Please make sure you specified the correct repo_id and repo_type.
If you are trying to access a private or gated repo, make sure you are authenticated.
Invalid credentials in Authorization header

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/limiaoyu/projects/movie/test_llavanext_s2.py", line 34, in <module>
    processor = LlavaNextVideoProcessor.from_pretrained(model_id)
  File "/home/limiaoyu/anaconda3/envs/llava_next/lib/python3.9/site-packages/transformers/processing_utils.py", line 975, in from_pretrained
    processor_dict, kwargs = cls.get_processor_dict(pretrained_model_name_or_path, **kwargs)
  File "/home/limiaoyu/anaconda3/envs/llava_next/lib/python3.9/site-packages/transformers/processing_utils.py", line 661, in get_processor_dict
    resolved_raw_chat_template_file = cached_file(
  File "/home/limiaoyu/anaconda3/envs/llava_next/lib/python3.9/site-packages/transformers/utils/hub.py", line 426, in cached_file
    raise EnvironmentError(
OSError: llava-hf/LLaVA-NeXT-Video-7B-hf is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo either by logging in with huggingface-cli login or by passing token=<your_token>

Process finished with exit code 1
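
One line in the traceback stands out: "Invalid credentials in Authorization header". That suggests a token is being attached to the request but rejected by the Hub, which can turn even a public repo into a 401. A minimal way to check the cached credentials (a diagnostic sketch, assuming only the standard huggingface_hub client API, not part of the original script):

# Diagnostic sketch: whoami() is rejected in the same way as
# hf_hub_download() when the locally stored token is invalid.
from huggingface_hub import whoami

try:
    print(whoami())  # prints account info if the stored token is valid
except Exception as err:
    print(f"Token check failed: {err}")
    # If this fails, re-run `huggingface-cli login`, or unset a stale
    # HF_TOKEN / HUGGING_FACE_HUB_TOKEN environment variable.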

Llava Hugging Face org

@BavariaForest very weird. Can you share your env with transformers-cli env and the inference script you used?

The file it is looking for should be optional, and thus shouldn't raise any errors tbh. For me, the demo from the model page works on v4.48.
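
If it helps, you can confirm from your side that the file is genuinely absent from the repo (a small sketch, assuming huggingface_hub's file_exists helper, available since v0.21):

# Sketch: check whether chat_template.jinja actually exists in the repo,
# without downloading anything.
from huggingface_hub import file_exists

print(file_exists("llava-hf/LLaVA-NeXT-Video-7B-hf", "chat_template.jinja"))
# Expected: False -- the processor should treat the missing file as optional.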

@RaushanTurganbay I encountered the same issue when using the example script.

import av
import torch
import numpy as np
from huggingface_hub import hf_hub_download
from transformers import LlavaNextVideoProcessor, LlavaNextVideoForConditionalGeneration

model_id = "llava-hf/LLaVA-NeXT-Video-7B-hf"

model = LlavaNextVideoForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
).to(0)

processor = LlavaNextVideoProcessor.from_pretrained(model_id)

def read_video_pyav(container, indices):
    '''
    Decode the video with PyAV decoder.
    Args:
        container (av.container.input.InputContainer): PyAV container.
        indices (List[int]): List of frame indices to decode.
    Returns:
        result (np.ndarray): np array of decoded frames of shape (num_frames, height, width, 3).
    '''
    frames = []
    container.seek(0)
    start_index = indices[0]
    end_index = indices[-1]
    for i, frame in enumerate(container.decode(video=0)):
        if i > end_index:
            break
        if i >= start_index and i in indices:
            frames.append(frame)
    return np.stack([x.to_ndarray(format="rgb24") for x in frames])

# define a chat history and use apply_chat_template to get correctly formatted prompt
# Each value in "content" has to be a list of dicts with types ("text", "image", "video")
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Why is this video funny?"},
            {"type": "video"},
        ],
    },
]

prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)

video_path = hf_hub_download(repo_id="raushan-testing-hf/videos-test", filename="sample_demo_1.mp4", repo_type="dataset")
container = av.open(video_path)

# sample uniformly 8 frames from the video, can sample more for longer videos
# (a worked example of this indexing follows after the script)
total_frames = container.streams.video[0].frames
indices = np.arange(0, total_frames, total_frames / 8).astype(int)
clip = read_video_pyav(container, indices)
inputs_video = processor(text=prompt, videos=clip, padding=True, return_tensors="pt").to(model.device)

output = model.generate(**inputs_video, max_new_tokens=100, do_sample=False)
print(processor.decode(output[0][2:], skip_special_tokens=True))
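
To make the frame-sampling step concrete, here is a small worked example with a hypothetical 96-frame video (illustrative numbers only, not from the failing run):

# Worked example of the uniform-sampling line above, assuming a
# hypothetical video with exactly 96 frames.
import numpy as np

total_frames = 96
indices = np.arange(0, total_frames, total_frames / 8).astype(int)
print(indices)  # [ 0 12 24 36 48 60 72 84] -> 8 evenly spaced frame indices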

The environment:

transformers-cli env

Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.

  • transformers version: 4.48.1
  • Platform: Linux-6.8.0-40-generic-x86_64-with-glibc2.35
  • Python version: 3.9.12
  • Huggingface_hub version: 0.28.0
  • Safetensors version: 0.5.2
  • Accelerate version: 1.3.0
  • Accelerate config: not found
  • PyTorch version (GPU?): 2.4.0+cu121 (True)
  • Tensorflow version (GPU?): not installed (NA)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed
  • Using distributed or parallel set-up in script?:
  • Using GPU in script?:
  • GPU type: NVIDIA RTX 6000 Ada Generation

Thank you for your response!

Llava Hugging Face org

Thanks for providing more info! This is a very interesting case, as I couldn't reproduce it using the same versions of the hub and transformers. The file it is looking for is not supposed to exist, and the error should have been skipped. But for some reason, the code raised a different error type, so we didn't skip it.

We'll investigate; I suspect it has something to do with the environment.
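
For context, here is a rough sketch of the distinction involved (an assumption drawn from the traceback above, not a copy of the transformers internals): a truly missing optional file surfaces as EntryNotFoundError, which the caller can safely swallow, while rejected credentials make the Hub answer 401 and raise RepositoryNotFoundError, which propagates:

# Rough illustration (assumption, not the exact transformers code path):
from huggingface_hub import hf_hub_download
from huggingface_hub.errors import EntryNotFoundError, RepositoryNotFoundError

try:
    hf_hub_download("llava-hf/LLaVA-NeXT-Video-7B-hf", "chat_template.jinja")
except EntryNotFoundError:
    pass  # file genuinely absent: the optional-file case, safe to skip
except RepositoryNotFoundError:
    raise  # 401 / invalid token lands here instead, so it is not skipped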

@RaushanTurganbay Thank you! I will try to figure it out.
