prithivMLmods committed on
Commit 978596f · verified · 1 Parent(s): 615e247

Update README.md

Files changed (1)
  1. README.md +7 -8
README.md CHANGED
@@ -14,6 +14,7 @@ tags:
  - llama
  - CoT
  - Thinker
+ - LlamaForCausalLM
  ---
  ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/bMONeEIzYGnh7b7oppgBN.png)

@@ -21,15 +22,9 @@ tags:

  SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device. Fine-tuning a language model like SmolLM involves several steps, from setting up the environment to training the model and saving the results. Below is a detailed step-by-step guide based on the provided notebook file.

- | **Notebook** | **Link** |
- |--------------|----------|
- | SmolLM-FT-360M | [SmolLM-FT-360M.ipynb](https://huggingface.co/datasets/prithivMLmods/FinetuneRT-Colab/blob/main/SmolLM-FT/SmolLM-FT-360M.ipynb) |
-
  ---

- ### How to use
-
- ### Transformers
+ # How to use `Transformers`
  ```bash
  pip install transformers
  ```
@@ -254,10 +249,14 @@ After training, save the fine-tuned model and tokenizer to a local directory.
  | **Model** | [SmolLM2-CoT-360M](https://huggingface.co/prithivMLmods/SmolLM2-CoT-360M) |
  | **Quantized Version** | [SmolLM2-CoT-360M-GGUF](https://huggingface.co/prithivMLmods/SmolLM2-CoT-360M-GGUF) |

+ | **Notebook** | **Link** |
+ |--------------|----------|
+ | SmolLM-FT-360M | [SmolLM-FT-360M.ipynb](https://huggingface.co/datasets/prithivMLmods/FinetuneRT-Colab/blob/main/SmolLM-FT/SmolLM-FT-360M.ipynb) |
+
  ### **Conclusion**

  Fine-tuning SmolLM involves setting up the environment, loading the model and dataset, configuring training parameters, and running the training loop. By following these steps, you can adapt SmolLM to your specific use case, whether it’s for reasoning tasks, chat-based applications, or other NLP tasks.

  This process is highly customizable, so feel free to experiment with different datasets, hyperparameters, and training strategies to achieve the best results for your project.

- ---
+ ---
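As a supplement to the "How to use `Transformers`" heading added in this diff (the README excerpt above only carries the `pip install` step), here is a minimal inference sketch for the fine-tuned checkpoint listed in the model table. The prompt, dtype, and generation settings are illustrative assumptions, and it presumes the tokenizer ships a chat template; it is not code from the README or the notebook.

```python
# Minimal sketch (not from the README): load prithivMLmods/SmolLM2-CoT-360M
# and run a single chat-style generation with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "prithivMLmods/SmolLM2-CoT-360M"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float32)

# Assumes the tokenizer carries a chat template; the question is a placeholder.
messages = [{"role": "user", "content": "Explain why the sky appears blue."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

# Greedy decoding; print only the newly generated tokens.
output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```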
 
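The conclusion summarizes the loop the README walks through: set up the environment, load the model and dataset, configure training parameters, run training, and save the result. The sketch below is a hedged outline of that loop using the plain `transformers` Trainer; the base checkpoint, dataset id, "text" column, hyperparameters, and output paths are placeholders, not values from the SmolLM-FT-360M notebook.

```python
# Hedged outline of the fine-tuning loop described in the conclusion.
# Base model, dataset id, column name, and hyperparameters are placeholders,
# not the SmolLM-FT-360M notebook's actual values.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "HuggingFaceTB/SmolLM2-360M"              # assumed base checkpoint
raw = load_dataset("your-org/your-cot-dataset")        # placeholder dataset id

tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token              # ensure a pad token for batching
model = AutoModelForCausalLM.from_pretrained(base_model)

def tokenize(batch):
    # Assumes a plain "text" column holding already-formatted examples.
    return tokenizer(batch["text"], truncation=True, max_length=1024)

train_ds = raw["train"].map(tokenize, batched=True, remove_columns=raw["train"].column_names)

args = TrainingArguments(
    output_dir="smollm2-cot-360m-ft",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    learning_rate=2e-5,
    num_train_epochs=1,
    logging_steps=10,
    save_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Save the fine-tuned model and tokenizer to a local directory, as the README describes.
trainer.save_model("smollm2-cot-360m-ft")
tokenizer.save_pretrained("smollm2-cot-360m-ft")
```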