runtime error
Exit code: 1. Reason:
 68 sha256=1776769f7ae3a8be3b31ec3a4c875ad1764da74be2d9b1751e5c01162ad0096f
  Stored in directory: /home/user/.cache/pip/wheels/59/ce/d5/08ea07bfc16ba218dc65a3a7ef9b6a270530bcbd2cea2ee1ca
Successfully built flash-attn
Installing collected packages: flash-attn
Successfully installed flash-attn-2.7.4.post1
Folder already exists at: ./xcodec_mini_infer
Changed working directory to: /home/user/app
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
0it [00:00, ?it/s]
0it [00:00, ?it/s]
Loading model...
Downloading shards:   0%|          | 0/3 [00:00<?, ?it/s]
Downloading shards:  33%|███▎      | 1/3 [00:10<00:20, 10.42s/it]
Downloading shards:  67%|██████▋   | 2/3 [00:21<00:10, 10.87s/it]
Downloading shards: 100%|██████████| 3/3 [00:27<00:00,  8.60s/it]
Downloading shards: 100%|██████████| 3/3 [00:27<00:00,  9.17s/it]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 74, in <module>
    model = AutoModelForCausalLM.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 564, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4105, in from_pretrained
    config = cls._autoset_attn_implementation(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1525, in _autoset_attn_implementation
    cls._check_and_enable_flash_attn_2(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1668, in _check_and_enable_flash_attn_2
    raise ValueError(
ValueError: FlashAttention2 has been toggled on, but it cannot be used due to the following error: Flash Attention 2 is not available on CPU. Please make sure torch can access a CUDA device.
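The traceback points at the root cause: the `AutoModelForCausalLM.from_pretrained(...)` call at app.py line 74 requests FlashAttention 2, but the container has no CUDA device, and FlashAttention 2 cannot run on CPU. Since `flash-attn` itself built and installed fine, one option is simply to run the Space on GPU hardware. If it must also start on CPU, a minimal sketch of a workaround is below; the model id, dtype, and `device_map` choices are placeholders and assumptions, not taken from the log, and the actual call in app.py may differ. The idea is to request FlashAttention 2 only when a GPU is visible and fall back to the default `"sdpa"` implementation otherwise:

```python
import torch
from transformers import AutoModelForCausalLM

# Placeholder model id; substitute the checkpoint actually loaded in app.py.
MODEL_ID = "your-org/your-model"

# FlashAttention 2 only runs on CUDA, so only request it when a GPU is available.
use_cuda = torch.cuda.is_available()

model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    # dtype and device placement below are illustrative assumptions, not from the log
    torch_dtype=torch.bfloat16 if use_cuda else torch.float32,
    attn_implementation="flash_attention_2" if use_cuda else "sdpa",
    device_map="auto" if use_cuda else None,
)
```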