---
base_model: unsloth/mistral-7b-bnb-4bit
library_name: peft
---

# Model Card for Mistral 7B Fine-tuned by Erkinbek Niiazbek uulu

## Model Details

### Model Description

This is a fine-tuned version of Mistral 7B, developed by Erkinbek Niiazbek uulu. It was fine-tuned with LoRA (Low-Rank Adaptation) via the PEFT library on the 4-bit quantized base model unsloth/mistral-7b-bnb-4bit, which keeps it suitable for lightweight deployment.

  • Developed by: Erkinbek Niiazbek uulu
  • Contact Email: [email protected]
  • Base Model: unsloth/mistral-7b-bnb-4bit
  • Library Name: PEFT
  • Language(s): Multilingual (including Kyrgyz)
  • License: [Specify your license type, e.g., Apache 2.0, MIT]
  • Fine-tuned from model: unsloth/mistral-7b-bnb-4bit

## Uses

### Direct Use

This fine-tuned model is designed for tasks such as:

  • Multilingual question answering
  • Text summarization
  • Natural language generation

### Downstream Use

This model can be further fine-tuned for domain-specific applications.

### Out-of-Scope Use

This model is not intended for generating harmful, offensive, or unethical content.


## Bias, Risks, and Limitations

### Recommendations

While this model has been fine-tuned for specific tasks, its outputs may reflect biases present in the base model and the fine-tuning data. Users should review outputs critically, especially in sensitive applications.


## How to Get Started with the Model

To load the model and run inference, you can use the following code. The repository id and the Alpaca prompt template below are placeholders; substitute the id of this repository and the exact template used during fine-tuning:

```python
from unsloth import FastLanguageModel
from transformers import TextStreamer

# Load the fine-tuned model and tokenizer.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "path/to/this-model",  # replace with this repository's id
    max_seq_length = 2048,
    load_in_4bit = True,
)

# Standard Alpaca-style prompt template (assumed; use the one from training).
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

FastLanguageModel.for_inference(model)  # enable native 2x faster inference
inputs = tokenizer(
    [
        alpaca_prompt.format(
            "Зекет деген эмне?",  # instruction (Kyrgyz: "What is zakat?")
            "",                   # input
            "",                   # output - leave blank for generation!
        )
    ],
    return_tensors = "pt",
).to("cuda")

text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 1024)
```

### Framework versions

- PEFT 0.14.0