Model Information

The Qwen2.5-1.5B-NextJs-code model is a quantized, fine-tuned version of the Qwen2.5-1.5B-Instruct model, designed specifically for generating Next.js 14 code.

  • Base model: Qwen/Qwen2.5-1.5B-Instruct
  • Model size: 1.54B params
  • Tensor type: FP16

How to use

Starting with transformers version 4.44.0, you can run conversational inference using the Transformers pipeline.

Make sure to update your transformers installation via pip install --upgrade transformers.
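
To confirm the installed version programmatically, a quick check like the following works (a convenience sketch, not part of the original instructions; the packaging library ships as a transformers dependency):

import transformers
from packaging import version

# Fail early if the installed transformers is older than the 4.44.0 minimum
assert version.parse(transformers.__version__) >= version.parse("4.44.0"), \
    "transformers >= 4.44.0 is required; run pip install --upgrade transformers"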

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
def get_pipeline():
    model_name = "nirusanan/Qwen2.5-1.5B-NextJs-code"

    # Load the tokenizer and use the end-of-sequence token for padding
    tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
    tokenizer.pad_token = tokenizer.eos_token

    # Load the model in FP16 on the first CUDA device
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        torch_dtype=torch.float16,
        device_map="cuda:0",
        trust_remote_code=True
    )

    # max_length caps prompt + generated tokens combined
    pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=3500)

    return pipe

pipe = get_pipeline()
def generate_prompt(project_title, description):
    prompt = f"""Below is an instruction that describes a project. Write Nextjs 14 code to accomplish the project described below.

### Instruction:
Project:
{project_title}

Project Description:
{description}

### Response:
"""
    return prompt
prompt = generate_prompt(project_title="Your NextJs project", description="Your NextJs project description")
result = pipe(prompt)
generated_text = result[0]['generated_text']
# Keep only the code before the "### End" marker emitted by the fine-tuned model
print(generated_text.split("### End")[0])
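
For more control over decoding, the pipeline also accepts the standard generation keyword arguments per call. The values below are illustrative assumptions, not settings from this card; note that max_new_tokens takes precedence over the pipeline's max_length (transformers emits a warning):

result = pipe(
    prompt,
    max_new_tokens=2048,   # assumption: cap on newly generated tokens only
    do_sample=True,        # sample instead of greedy decoding
    temperature=0.7,       # assumption: moderate randomness
    top_p=0.9,             # assumption: nucleus sampling cutoff
    pad_token_id=pipe.tokenizer.eos_token_id,
)
print(result[0]["generated_text"].split("### End")[0])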