Luna Model

This document describes the usage and functionality of Luna, a personal model fine-tuned from the Phi-3 base model for the specific tasks detailed below.

Table of Contents

  • Introduction
  • Requirements
  • Installation
  • Usage

Introduction

The Luna Model is a customized version of the Phi-3 model tailored for specific tasks such as text generation. This model leverages the capabilities of the Phi-3 architecture to provide efficient and accurate results for various natural language processing tasks.

Requirements

  • Ollama

Installation

Install Ollama

curl -fsSL https://ollama.com/install.sh | sh

Usage

Create a Modelfile

touch Modelfile

Modelfile content

FROM ./models/luna-4b-v0.5.gguf

PARAMETER temperature 1
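A Modelfile can also set a system prompt and additional sampling parameters. The values below are illustrative assumptions, not settings shipped with the released model:

```
FROM ./models/luna-4b-v0.5.gguf

# Sampling settings (illustrative; tune to taste)
PARAMETER temperature 1
PARAMETER num_ctx 4096

# Optional system prompt (hypothetical wording)
SYSTEM You are Luna, a helpful assistant.
```

Ollama applies these directives when the model is created, so changing them requires re-running ollama create.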

Load the model

ollama create luna -f ./Modelfile

Run Model

ollama run luna

Python Usage

import ollama

# Stream a chat response from the local Luna model
stream = ollama.chat(
    model='luna',
    messages=[{'role': 'user', 'content': 'Who are you?'}],
    stream=True,
)

for chunk in stream:
    print(chunk['message']['content'], end='', flush=True)
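When the full reply is needed as a single string (e.g. for logging or post-processing), the streamed chunks can be accumulated instead of printed. The sketch below uses hypothetical sample chunks in the shape returned by ollama.chat(stream=True), so it runs without a server:

```python
def collect_stream(chunks):
    """Concatenate the content of streamed chat chunks into one string."""
    return ''.join(chunk['message']['content'] for chunk in chunks)

# Hypothetical chunks mimicking the stream's structure
sample = [
    {'message': {'content': 'I am '}},
    {'message': {'content': 'Luna.'}},
]
print(collect_stream(sample))  # prints "I am Luna."
```

In real use, pass the generator returned by ollama.chat directly to collect_stream.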
Model Details

  • Format: GGUF
  • Model size: 3.82B params
  • Architecture: phi3