---
datasets:
- mahfoos/Patient-Doctor-Conversation
language:
- en
library_name: peft
tags:
- finetuned
- medical
- chatbot
- peft
- qlora
---

This is a finetuned model I made using QLoRA (PEFT) with `lora_r` = 64 (the rank of the low-rank matrices in which the weight updates are learned) and `lora_alpha` = 16 (the scaling factor that controls how strongly the LoRA update changes the base parameters). These are the most important hyperparameters of QLoRA, which I learnt recently with the help of Krish Naik.
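
Below is a minimal sketch of what a LoRA configuration with these hyperparameters looks like in the PEFT library. Only `r=64` and `lora_alpha=16` come from this card; the base model name, `target_modules`, and `lora_dropout` are placeholder assumptions and will depend on the actual training setup.

```python
# Sketch of the LoRA configuration described above (assumptions noted inline).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Placeholder base model -- the card does not specify which model was finetuned.
base_model = AutoModelForCausalLM.from_pretrained("your-base-model")

lora_config = LoraConfig(
    r=64,                                  # rank of the low-rank update matrices
    lora_alpha=16,                         # scaling factor for the LoRA update
    lora_dropout=0.05,                     # assumed value; not stated in the card
    target_modules=["q_proj", "v_proj"],   # assumed; depends on the base model
    task_type="CAUSAL_LM",
)

# Wrap the base model so that only the LoRA adapter weights are trainable.
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```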