llama2-piratelora-13b

This repo contains a Low-Rank Adapter (LoRA) for Llama 2 13B (float16), trained on a simple dataset of thousands of pirate phrases, conversation pieces, and obscura. The purpose of this LoRA was to determine whether dialect and diction can be enforced through LoRA fine-tuning. Results were less than perfect, but the adapter does seem to push the model toward maritime and nautical topics when given open-ended generation prompts.
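
A minimal loading sketch using 🤗 Transformers and PEFT. The adapter repo id below is a placeholder for this repo's actual id, and the base model id assumes the standard gated `meta-llama/Llama-2-13b-hf` weights; adjust both to match your setup.

```python
# Minimal sketch: attach this LoRA adapter to the Llama 2 13B base model.
# "your-username/llama2-piratelora-13b" is a hypothetical placeholder id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-13b-hf"                # base model the LoRA was fit on
adapter_id = "your-username/llama2-piratelora-13b"   # placeholder: this adapter repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.float16,  # matches the float16 base used for training
    device_map="auto",
)
model = PeftModel.from_pretrained(base, adapter_id)  # load the LoRA weights on top

prompt = "Tell me about your ship."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.8)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```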
