Fine-tuned Model For My Thesis: Design And Implementation Of An Adaptive Virtual Intelligent Teaching Assistant Based On Supervised Fine-tuning Of A Pre-trained Large Language Model
Model Name: CodeOptimus - Adaptive Supervised Instruction Fine-tuning of Mistral 7B Instruct using QLoRA.
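The snippet below is a minimal sketch of what a QLoRA supervised instruction fine-tune of Mistral-7B-Instruct-v0.1 can look like with the Hugging Face transformers/peft/trl/bitsandbytes stack. The dataset repo id, prompt template, LoRA settings, and all hyperparameters are illustrative assumptions, not the exact recipe used for CodeOptimus.

```python
# Illustrative QLoRA SFT sketch (assumed hyperparameters; not the exact CodeOptimus recipe).
# Matches the trl 0.7.x-style SFTTrainer interface; newer trl versions move some
# arguments into SFTConfig.
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from peft import LoraConfig
from trl import SFTTrainer

base_model = "mistralai/Mistral-7B-Instruct-v0.1"

# Load the base model with 4-bit NF4 quantization so only the LoRA adapters
# need to be trained in higher precision.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token

# Alpaca-style instruction dataset (repo id assumed); the custom curated data
# would be mixed in here as well.
dataset = load_dataset("TokenBender/code_instructions_122k_alpaca_style", split="train")

def format_example(example):
    # Convert an Alpaca-style record into Mistral's [INST] chat format.
    prompt = example["instruction"]
    if example.get("input"):
        prompt += "\n\n" + example["input"]
    return {"text": f"<s>[INST] {prompt} [/INST] {example['output']}</s>"}

dataset = dataset.map(format_example)

# LoRA adapters on the attention projections (illustrative choice of targets).
peft_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=1024,
    peft_config=peft_config,
    args=TrainingArguments(
        output_dir="codeoptimus-qlora",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        bf16=True,
        logging_steps=50,
    ),
)
trainer.train()
trainer.save_model("codeoptimus-qlora")
```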
Prerequisites For Reproduction
- GPU: Training requires substantial GPU memory; I used 7 NVIDIA A100s.
- Training Time: Approximately 1 week.
- RAG Module: Updates the model's knowledge base in real time with adaptive features learned from conversations with the model over time (see the retrieval sketch after this list).
- Python Packages: Install the packages listed in requirements.txt.
- Dataset: Download code_instructions_122k_alpaca_style plus a custom curated dataset.
- Mistral-7B-Instruct-v0.1: Download the mistralai/Mistral-7B-Instruct-v0.1 PyTorch bin weights.
- Realistic 3D Intelligent Persona/Avatar (Optional): For this I'm using Soul Machines' digital humans.
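The sketch below illustrates the core idea behind the RAG module listed above: material learned from conversations is embedded and stored, then retrieved at query time and prepended to the prompt so answers stay grounded in the accumulated knowledge base. The class and function names are hypothetical, and the actual module's storage backend, embedding model, and update policy are not documented here.

```python
# Hypothetical sketch of a conversation-updated knowledge base with
# embedding-based retrieval; the real RAG module may differ substantially.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

class ConversationKnowledgeBase:
    def __init__(self):
        self.texts: list[str] = []
        self.vectors: list[np.ndarray] = []

    def add(self, text: str) -> None:
        # Called after a conversation turn to persist newly learned material.
        self.texts.append(text)
        self.vectors.append(embedder.encode(text, normalize_embeddings=True))

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        # Return the k stored snippets most similar to the query.
        if not self.texts:
            return []
        q = embedder.encode(query, normalize_embeddings=True)
        scores = np.array(self.vectors) @ q  # cosine similarity on unit vectors
        top = np.argsort(scores)[::-1][:k]
        return [self.texts[i] for i in top]

def build_prompt(kb: ConversationKnowledgeBase, question: str) -> str:
    # Prepend retrieved context to the instruction before generation.
    context = "\n".join(kb.retrieve(question))
    return f"[INST] Context:\n{context}\n\nQuestion: {question} [/INST]"
```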