
Model Card for beetelware-saudi-R1-Distill-Llama-8B

This model is a fine-tuned version of unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit. It has been trained using TRL.

Quick start

from transformers import pipeline
from huggingface_hub import login

login("hf_*********************")  # Replace with your own access token from https://huggingface.co/settings/tokens



# Example question (English gloss: "Please translate the following text into the Hijazi dialect, Belal:
# my name is Loai Abdel Salam, I live in the city of Alexandria in Egypt, and I study artificial intelligence.")
question = """ترجم النص التالي الي اللهجة الحجازية من فضلك يا بلال : انا اسمي لؤي عبد السلام أسكن بمصر مدينة الاسكندرية في مصر أدرس ذكاء اصطناعي"""

# Define the prompt template: system instruction, a slot for the question, and a <think> prefix for the reasoning trace
prompt_style = """Below is an instruction that describes a task, paired with an input that provides further context. 
Write a response that appropriately completes the request. 
Before answering, think carefully about the question and create a step-by-step chain of thoughts to ensure a logical and accurate response  .

### Instruction:
You are a arabic chatbot called belal with advanced knowledge in saudi arabia culture and can Speak different Saudi dialects. 
Please answer the following client question in arabic . 

### Question:
{}

### Response:
<think>{}"""

prompt = prompt_style.format(question, "")



# Load the fine-tuned model and generate; generated_text holds the full chat (prompt + assistant reply)
generator = pipeline("text-generation", model="loaiabdalslam/beetelware-saudi-R1-Distill-Llama-8B")
output = generator([{"role": "user", "content": prompt}], max_new_tokens=5000, return_full_text=True)[0]
print(output["generated_text"])


output:

[{'role': 'user',
  'content': 'Below is an instruction that describes a task, paired with an input that provides further context. \n'
             'Write a response that appropriately completes the request. \n'
             'Before answering, think carefully about the question and create a step-by-step chain of thoughts '
             'to ensure a logical and accurate response  .\n\n'
             '### Instruction:\n'
             'You are a arabic chatbot called belal with advanced knowledge in saudi arabia culture and can Speak different Saudi dialects. \n'
             'Please answer the following client question in arabic . \n\n'
             '### Question:\n'
             'ترجم النص التالي الي اللهجة الحجازية من فضلك يا بلال : انا اسمي لؤي عبد السلام أسكن بمصر مدينة الاسكندرية في مصر أدرس ذكاء اصطناعي\n\n'
             '### Response:\n'
             '<think>'},
 {'role': 'assistant',
  'content': '\nOkay, I need to translate the given text into the Hijazi dialect. The user is asking me to help with this translation. '
             'First, I should understand the original text. The name is "لؤي عبد السلام" which is "Loui Abdel Salem". Then, he lives in Egypt, '
             'specifically in Alexandria. He studies artificial intelligence. \n\n'
             'Now, translating this into Hijazi dialect. I should make sure the names stay the same, as they are proper nouns. '
             'So, "لؤي" is "Loui", "عبد السلام" is "Abdel Salem". \n\n'
             'Next, his location: "أسكن بمصر مدينة الاسكندرية". '
             'In Hijazi dialect, "أَسكن" is "I live" or "I reside". "مصر" is Egypt, but in dialect, it might be "مصر" or "مصري". '
             '"مدينة" is city, so "مدينة الاسكندرية" is "Alexandria city". \n\n'
             'Lastly, he studies "ذكاء اصطناعي" which is artificial intelligence. '
             'In dialect, it might be "ذكاء اصطناعي" or "ذكاء اصطناعي". \n\n'
             'Putting it all together: "أَسكن بمصر مدينة الاسكندرية في مصر أدرس ذكاء اصطناعي".\n\n'
             'I think that\'s the correct translation. I should double-check each part to ensure accuracy. '
             'Names should remain as they are, locations should be correctly translated, and technical terms like "ذكاء اصطناعي" '
             'should stay the same or adjust if needed. \n\n'
             'Yes, this seems accurate. I\'ll present it as the response.\n'
             '</think>\n\n'
             'أَسكن بمصر مدينة الاسكندرية في مصر أدرس ذكاء اصطناعي'}]
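
The reply keeps the model's chain of thought inside <think>...</think>, as shown above. If you only want the final answer, you can strip the reasoning block; this is a minimal post-processing sketch (strip_reasoning is a helper defined here, not part of the pipeline API):

def strip_reasoning(text):
    # Drop everything up to and including the closing </think> tag, if present
    if "</think>" in text:
        return text.split("</think>", 1)[1].strip()
    return text.strip()

messages = output["generated_text"]                # the full chat returned by the pipeline
answer = strip_reasoning(messages[-1]["content"])  # the last message is the assistant's reply
print(answer)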

Training procedure


This model was trained with supervised fine-tuning (SFT).
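
The training script itself is not included in this card, so the snippet below is only a rough sketch of what an SFT run with TRL's SFTTrainer typically looks like for this setup. The dataset file, column format, and hyperparameters are placeholders rather than the values actually used, and a 4-bit quantized base would normally be trained through a LoRA adapter (peft), which is omitted here.

from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset: any dataset with a "text" column holding prompts formatted
# like prompt_style above, followed by the reference reasoning and answer
dataset = load_dataset("json", data_files="saudi_dialect_sft.jsonl", split="train")

training_args = SFTConfig(
    output_dir="outputs",             # matches the "outputs" name used in this card
    per_device_train_batch_size=2,    # placeholder hyperparameters
    gradient_accumulation_steps=4,
    num_train_epochs=1,
    learning_rate=2e-4,
    max_seq_length=2048,
)

trainer = SFTTrainer(
    model="unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit",  # base model named in this card
    args=training_args,
    train_dataset=dataset,
    # peft_config=LoraConfig(...)  # usually required when the base model is 4-bit quantized
)
trainer.train()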

Framework versions

  • TRL: 0.14.0
  • Transformers: 4.43.4
  • Pytorch: 2.5.1+cu121
  • Datasets: 3.2.0
  • Tokenizers: 0.19.1
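
Before running the Quick start, you can check that your environment matches these versions; a small sketch (install the pinned versions first, e.g. with pip):

import datasets
import tokenizers
import torch
import transformers
import trl

# Expected versions, taken from this model card
print("TRL:", trl.__version__)                      # 0.14.0
print("Transformers:", transformers.__version__)    # 4.43.4
print("PyTorch:", torch.__version__)                # 2.5.1+cu121
print("Datasets:", datasets.__version__)            # 3.2.0
print("Tokenizers:", tokenizers.__version__)        # 0.19.1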

Business inquiries

Contact us: https://beetleware.com/

Citations

@loaiiabdalslam (Beetleware) and @hamdy waleed (Beetleware)

Cite TRL as:

@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}

@misc{loai_abdalslam_2025,
    author       = {Loai Abdalslam and Hamdy Waleed},
    title        = {beetelware-saudi-R1-Distill-Llama-8B (Revision 03cfaf5)},
    year         = 2025,
    url          = {https://huggingface.co/loaiabdalslam/beetelware-saudi-R1-Distill-Llama-8B},
    doi          = {10.57967/hf/4375},
    publisher    = {Hugging Face}
}

