# Llama3.1-SuperDeepFuse-CrashCourse12K
Llama3.1-SuperDeepFuse-CrashCourse12K is an 8B parameter language model based on Llama3.1-SuperDeepFuse and further fine-tuned on agentlans/crash-course.
## Model Details
- Base Model: Llama3.1-SuperDeepFuse (8B parameters)
- Fine-tuning Dataset: 12,000 samples from agentlans/crash-course (a mix drawn from 10 high-quality instruction datasets)
- Model Type: Instruction-tuned language model
- Language(s): Multilingual
- License: Follows standard Llama 3.1 usage terms
## Training Procedure
### Fine-tuning
- Method: LoRA (Low-Rank Adaptation)
- Optimizer: AdamW
- Learning Rate: 5e-5
- Batch Size: 2 per device
- Gradient Accumulation Steps: 8
- Training Epochs: 1
- Max Sequence Length: 2048
- LoRA Configuration:
  - Rank: 8
  - Alpha: 16
  - Dropout: 0.5
  - Target modules: all layers
- Quantization: 4-bit (bitsandbytes)
- Precision: BF16
- Other Techniques: NEFTune (noise alpha: 5), RS-LoRA
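The settings above map onto the Hugging Face `peft`/`transformers`/`bitsandbytes` stack roughly as follows. This is an illustrative sketch, not the published training script: the quantization type and single-GPU assumption are guesses, and argument names assume recent library versions.

```python
import torch
from peft import LoraConfig
from transformers import BitsAndBytesConfig, TrainingArguments

# 4-bit quantization of the frozen base model (QLoRA-style);
# the card only states "4-bit (bitsandbytes)", so defaults are assumed.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# LoRA adapter: rank 8, alpha 16, dropout 0.5, rank-stabilized scaling,
# applied to all linear layers.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.5,
    target_modules="all-linear",
    use_rslora=True,
    task_type="CAUSAL_LM",
)

# Effective batch size = 2 per device x 8 accumulation steps = 16 sequences,
# i.e. 12,000 / 16 = 750 optimizer steps for the single epoch (one GPU assumed).
training_args = TrainingArguments(
    output_dir="out",
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    bf16=True,
    optim="adamw_torch",
    neftune_noise_alpha=5,  # NEFTune embedding noise during fine-tuning
)
```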
## Performance and Limitations
This model may offer:
- Enhanced multi-task reasoning
- Improved performance on mathematics and coding tasks
- Better instruction-following abilities

However:
- Performance may be limited compared to larger model variants
- It can produce misleading or incorrect outputs
- Outputs should be independently verified before use in critical applications
## Additional Information
- For the original model, see agentlans/Llama3.1-SuperDeepFuse
- For the base Llama 3.1 model, including training data and model architecture, refer to the original Llama 3.1 model card.
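A minimal way to try the model, assuming the `transformers` library (with `accelerate` for `device_map="auto"`) and enough memory for an 8B checkpoint; the prompt is only an example:

```python
import torch
from transformers import pipeline

# Download and run the fine-tuned checkpoint from the Hugging Face Hub.
pipe = pipeline(
    "text-generation",
    model="agentlans/Llama3.1-SuperDeepFuse-CrashCourse12K",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain LoRA fine-tuning in two sentences."}]
print(pipe(messages, max_new_tokens=128)[0]["generated_text"])
```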
## Open LLM Leaderboard Evaluation Results
Detailed and summarized results are available on the Open LLM Leaderboard.
| Metric | Value (%) |
|---|---|
| Average | 27.93 |
| IFEval (0-shot) | 71.87 |
| BBH (3-shot) | 31.83 |
| MATH Lvl 5 (4-shot) | 17.67 |
| GPQA (0-shot) | 8.39 |
| MuSR (0-shot) | 8.60 |
| MMLU-PRO (5-shot) | 29.24 |
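The leaderboard Average is the plain mean of the six benchmark scores, which can be checked directly (values copied from the table above):

```python
# Per-benchmark scores from the Open LLM Leaderboard table above.
scores = {
    "IFEval": 71.87,
    "BBH": 31.83,
    "MATH Lvl 5": 17.67,
    "GPQA": 8.39,
    "MuSR": 8.60,
    "MMLU-PRO": 29.24,
}
average = round(sum(scores.values()) / len(scores), 2)
print(average)  # 27.93
```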