ImportError when trying to use 4-bit or 8-bit quantization on Colab
#182
by rocketpenguin25603 - opened
I was using Google Colab with a T4 GPU and trying to load this model with `BitsAndBytesConfig(load_in_4bit=True)`, but it keeps throwing:
ImportError: Using bitsandbytes 8-bit quantization requires the latest version of bitsandbytes: pip install -U bitsandbytes
I checked my bitsandbytes version and it is 0.45.0, the latest release from a few days ago, so I do not know how to resolve this. Please let me know if you need any other details!
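For reference, here is a rough sketch of what a minimum-version check like the one the error message implies could look like. This is a hypothetical illustration, not the actual transformers or bitsandbytes code; the function name and the 0.43.0 threshold are made up. It just shows that 0.45.0 would pass a plain version comparison, which makes me suspect the import itself is failing rather than the version number being too low:

```python
# Hypothetical sketch of a minimum-version check like the one the error
# message implies; NOT the actual transformers/bitsandbytes code.
def version_at_least(installed: str, required: str) -> bool:
    """Compare plain dotted version strings (e.g. "0.45.0") numerically."""
    to_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return to_tuple(installed) >= to_tuple(required)

# bitsandbytes 0.45.0 would satisfy a hypothetical 0.43.0 minimum, so a
# bare version comparison passes -- the ImportError may instead mean the
# bitsandbytes import itself failed (e.g. a CUDA setup issue on Colab).
print(version_at_least("0.45.0", "0.43.0"))  # → True
```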