# 🧠 Neura-4.1-Think
Neura-4.1-Think is a fine-tuned DeepSeek-R1-Distill-Qwen-1.5B model, designed for conversational AI, text generation, and reasoning tasks.
## 🚀 Model Details
- Base Model: DeepSeek-R1-Distill-Qwen-1.5B
- Fine-tuned on: A custom dataset aimed at improving reasoning and conversational ability.
- Purpose: General-purpose AI with improved creative writing and Q&A abilities.
## 🔥 Features
- Generates human-like text responses.
- Trained to recognize itself as "Neura-4.1-Think".
- Supports context-aware, multi-turn conversations (see the chat sketch under "How to Use").
- Provides informative and engaging answers.
## 🛠 How to Use
You can try the model directly on this page using the "Test Compute" button or use it in Python:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load the tokenizer and model (float16 on GPU if available, float32 on CPU)
model_name = "FahadCEO7376/Neura-4.1-Think"
device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16 if device == "cuda" else torch.float32
).to(device)

# Generate text with sampling; temperature and top_p control randomness
def generate_text(prompt):
    inputs = tokenizer(prompt, return_tensors="pt").to(device)
    output = model.generate(
        **inputs, max_new_tokens=200, temperature=0.7, top_p=0.9, do_sample=True
    )
    return tokenizer.decode(output[0], skip_special_tokens=True)

# Test the model
print(generate_text("Who are you?"))
```
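For the context-aware conversations mentioned under Features, here is a minimal multi-turn sketch that continues from the snippet above. It assumes the fine-tune keeps the chat template inherited from the DeepSeek-R1-Distill-Qwen-1.5B tokenizer; `apply_chat_template` is a standard `transformers` tokenizer method.

```python
# Multi-turn sketch; assumes the tokenizer ships a chat template
# inherited from the DeepSeek-R1-Distill-Qwen-1.5B base.
messages = [{"role": "user", "content": "Who are you?"}]

prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer(prompt, return_tensors="pt").to(device)
output = model.generate(
    **inputs, max_new_tokens=200, temperature=0.7, top_p=0.9, do_sample=True
)
# Decode only the newly generated tokens, not the prompt
reply = tokenizer.decode(
    output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)

# Append the reply and a follow-up turn to keep the conversation going
messages += [
    {"role": "assistant", "content": reply},
    {"role": "user", "content": "Summarize that in one sentence."},
]
```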