Barcenas R1 Qwen 1.5b

Based on deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B and fine-tuned with data from the pinzhenchen/alpaca-cleaned-es dataset.

The goal of this model is to provide a reasoning LLM in Spanish, in the style of o1 or R1, that is small enough to run on most computers.

Made with ❤️ in Guadalupe, Nuevo Leon, Mexico 🇲🇽
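For readers who want to try the model locally, here is a minimal sketch using the Hugging Face transformers library. The repo ID `Danielbrdz/Barcenas-R1-Qwen-1.5b` is taken from this page; the generation settings are illustrative, not an official recipe:

```python
# Usage sketch (assumptions: `transformers` and `torch` are installed;
# generation parameters are illustrative defaults, not tuned values).
MODEL_ID = "Danielbrdz/Barcenas-R1-Qwen-1.5b"

def build_prompt(question: str) -> list[dict]:
    """Wrap a question in the chat-message format used by apply_chat_template."""
    return [{"role": "user", "content": question}]

def generate(question: str, max_new_tokens: int = 512) -> str:
    """Download the model on first call and generate an answer."""
    from transformers import AutoModelForCausalLM, AutoTokenizer  # lazy import: heavy dependency

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
    inputs = tokenizer.apply_chat_template(
        build_prompt(question), add_generation_prompt=True, return_tensors="pt"
    )
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
```

For example, `print(generate("¿Cuánto es 17 por 23?"))` should produce a Spanish reasoning trace followed by an answer; actual output depends on the model and sampling settings.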

Model size: 1.78B parameters (Safetensors, FP16)