
Buy me a coffee if you like this project ;)

Description

GGML format model files for this project.

Inference


from ctransformers import AutoModelForCausalLM

# output_dir: local directory containing the downloaded GGML file
# ggml_file: file name of the GGML model inside that directory
llm = AutoModelForCausalLM.from_pretrained(output_dir,
                                           model_file=ggml_file,
                                           gpu_layers=32,
                                           model_type="llama")

manual_input: str = "Tell me about your last dream, please."

output = llm(manual_input,
             max_new_tokens=256,
             temperature=0.9,
             top_p=0.7)
print(output)
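ctransformers can also stream tokens as they are produced instead of returning the full completion at once. The sketch below reuses the llm object and sampling settings from the snippet above; it is an illustrative usage example, not part of the original card.

# Stream the completion token by token (reuses llm from the snippet above)
for token in llm(manual_input,
                 max_new_tokens=256,
                 temperature=0.9,
                 top_p=0.7,
                 stream=True):
    print(token, end="", flush=True)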

Original model card

Model Details

This is an unofficial implementation of AlpaGasus-13B, which is a chat assistant trained by fine-tuning LLaMA on a Claude-filtered Alpaca dataset with around 5K triplets.

  • Developed by: gpt4life
  • Model type: An auto-regressive language model based on the transformer architecture.
  • License: Non-commercial license
  • Finetuned from model: LLaMA-13B.

Please see the original LLaMA license before using this model.

Model Sources

Training Details

AlpaGasus-13B is fine-tuned from LLaMA-13B with supervised instruction fine-tuning on the filtered Alpaca dataset.
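As a rough illustration of what supervised instruction fine-tuning looks like in practice, here is a minimal sketch using Hugging Face transformers on an Alpaca-style JSON file of (instruction, input, output) triplets. The base checkpoint name, data path, prompt template, and hyperparameters are assumptions chosen for the example, not the exact AlpaGasus recipe.

import json

from torch.utils.data import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments)

BASE_MODEL = "huggyllama/llama-13b"      # assumed base checkpoint
DATA_PATH = "alpaca_filtered_5k.json"    # assumed filtered Alpaca dataset file

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA has no pad token by default


class AlpacaDataset(Dataset):
    """Turns Alpaca-style (instruction, input, output) triplets into LM samples."""

    def __init__(self, path, tokenizer, max_length=512):
        with open(path) as f:
            self.records = json.load(f)
        self.tokenizer = tokenizer
        self.max_length = max_length

    def __len__(self):
        return len(self.records)

    def __getitem__(self, idx):
        r = self.records[idx]
        prompt = (f"### Instruction:\n{r['instruction']}\n\n"
                  f"### Input:\n{r.get('input', '')}\n\n"
                  "### Response:\n")
        text = prompt + r["output"] + self.tokenizer.eos_token
        enc = self.tokenizer(text, truncation=True, max_length=self.max_length,
                             padding="max_length", return_tensors="pt")
        input_ids = enc["input_ids"].squeeze(0)
        attention_mask = enc["attention_mask"].squeeze(0)
        labels = input_ids.clone()
        labels[attention_mask == 0] = -100  # ignore padding in the loss
        return {"input_ids": input_ids,
                "attention_mask": attention_mask,
                "labels": labels}


model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

training_args = TrainingArguments(
    output_dir="alpagasus-13b-sft",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    num_train_epochs=3,
    learning_rate=2e-5,
    fp16=True,
    logging_steps=10,
    save_strategy="epoch",
)

Trainer(model=model,
        args=training_args,
        train_dataset=AlpacaDataset(DATA_PATH, tokenizer)).train()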
