Model Card

Devple is a fine-tuned model based on Llama 3.1 Instruct, designed for development tasks such as code generation and code review, with a focus on the quality and safety of the generated code. It was trained on a synthetic preference dataset whose chosen responses were generated with GPT-4o and whose rejected responses were generated with Llama 3.

Model Details

Model Description

Devple is a fine-tuned model based on Llama 3.1 Instruct, trained on a synthetic dataset. Training focused on development-related tasks such as code generation, code review, and refactoring, with particular emphasis on the quality and safety of the generated code.

Fine-tuning was done with ORPO (Odds Ratio Preference Optimization). The preference dataset pairs chosen responses generated by GPT-4o with rejected responses generated by Llama 3; a sketch of that data format follows below.
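
As an illustration only: ORPO-style preference training (for example with TRL's ORPOTrainer) typically consumes rows with prompt, chosen, and rejected fields. The sketch below shows what such a row might look like under this card's setup; the prompts and responses are invented for illustration and are not taken from Devple's actual training data.

from datasets import Dataset

# Hypothetical preference rows in the prompt/chosen/rejected format that
# ORPO-style trainers (e.g., TRL's ORPOTrainer) expect. Contents are
# illustrative only, not Devple's real training data.
preference_rows = [
    {
        "prompt": "Write a Python function that loads a JSON config file.",
        # Preferred response (in this card's setup, generated by GPT-4o).
        "chosen": (
            "import json\n"
            "from pathlib import Path\n"
            "\n"
            "def load_config(path: str) -> dict:\n"
            "    with Path(path).open(encoding='utf-8') as f:\n"
            "        return json.load(f)\n"
        ),
        # Rejected response (in this card's setup, generated by Llama 3):
        # works, but evaluates untrusted file contents with eval().
        "rejected": "def load_config(path): return eval(open(path).read())",
    },
]

train_dataset = Dataset.from_list(preference_rows)
print(train_dataset)  # 1 row with features: prompt, chosen, rejected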

  • Language(s) (NLP): English, Russian
  • Finetuned from model: Llama 3.1 Instruct

Uses

Direct Use

A minimal example with the transformers text-generation pipeline:

import transformers
import torch

model_id = "Kkaastr/Devple-8B"

# Build a text-generation pipeline; bfloat16 weights and device_map="auto"
# keep memory usage down and place the model on the available device(s).
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

# Chat-style input: a list of role/content messages.
messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

outputs = pipeline(
    messages,
    max_new_tokens=256,
)

# generated_text holds the full conversation; the last entry is the model's reply.
print(outputs[0]["generated_text"][-1])
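
If only the reply text is needed, the last message returned by the pipeline is a role/content dict (assuming a transformers version that returns chat messages, as the example above already relies on), so its content field can be read directly:

# Extract just the assistant's reply text from the returned conversation.
reply = outputs[0]["generated_text"][-1]["content"]
print(reply)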
  • Model size: 8.03B parameters
  • Tensor type: FP16 (Safetensors)
