---
license: apache-2.0
datasets:
- HuggingFaceFW/fineweb-2
language:
- fo
base_model:
- HuggingFaceTB/SmolLM2-135M-Instruct
library_name: peft
pipeline_tag: text-generation
---

This model is a SmolLM2-135M-Instruct model fine-tuned with LoRA on the Faroese portion of FineWeb-2. It is intended for my own research and has not been evaluated more broadly yet.

LoRA setup:
- Rank: 256
- Alpha: 512
- Target modules: ["up_proj", "down_proj", "gate_proj", "o_proj"]

Training:
- Epochs: 5
- Learning rate: 8e-4
- LR scheduler: Cosine
- Warmup ratio: 0.05
- Per-device batch size: 1
- GPUs: 4x A100 (40 GB)
- Gradient accumulation steps: 64
- Effective batch size: 256 (1 x 4 GPUs x 64 accumulation steps)
- Max. context length: 8192 tokens
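
The snippet below is a minimal sketch of how this configuration could be reproduced with `transformers`, `peft`, and `datasets`. It is not the exact training script; in particular, the FineWeb-2 config name (`fao_Latn`), the output directory, and the tokenization details are assumptions rather than values taken from this card.

```python
# Minimal sketch of the LoRA fine-tuning setup described above.
# Assumptions (not from the card): dataset config name "fao_Latn",
# output_dir, and the tokenization/packing strategy.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_id = "HuggingFaceTB/SmolLM2-135M-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# LoRA setup from the card: rank 256, alpha 512, MLP projections + attention output projection.
lora_config = LoraConfig(
    r=256,
    lora_alpha=512,
    target_modules=["up_proj", "down_proj", "gate_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Faroese portion of FineWeb-2; the config name "fao_Latn" is an assumption.
dataset = load_dataset("HuggingFaceFW/fineweb-2", name="fao_Latn", split="train")

def tokenize(batch):
    # Truncate to the 8192-token context length used for training.
    return tokenizer(batch["text"], truncation=True, max_length=8192)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

# Hyperparameters from the card; with 4 GPUs and 64 accumulation steps,
# the effective batch size is 1 * 4 * 64 = 256.
training_args = TrainingArguments(
    output_dir="smollm2-135m-faroese-lora",  # placeholder name
    num_train_epochs=5,
    learning_rate=8e-4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=64,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```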