Commit 1bf6fca by Solshine (verified) · 1 parent: 75e4aff

Update README.md

Files changed (1): README.md (+23 −3)

README.md CHANGED
```diff
@@ -8,15 +8,35 @@ tags:
 - unsloth
 - mistral
 - gguf
+- agriculture
+- farming
+- climate
+- biology
+- agritech
 base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
+datasets:
+- CopyleftCultivars/Natural-Farming-Real-QandA-Conversations-Q1-2024-Update
 ---
 
 # Uploaded model
 
-- **Developed by:** Solshine
+- **Developed by:** Caleb DeLeeuw, Copyleft Cultivars
 - **License:** apache-2.0
 - **Finetuned from model:** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
 
-This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
-
-[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+
+Background:
+Using real-world user data from a previous farmer assistant chatbot service and additional curated datasets (prioritizing sustainable, regenerative, organic farming practices), Gemma 2B and Mistral 7B LLMs were iteratively fine-tuned and tested against each other as well as against basic benchmarks; the Gemma 2B fine-tune came out ahead. LoRA adapters were saved for each model, and the Gemma version was then released.
+
+Updates for this model:
+We then revisited the data, adding four additional months of real-world, in-field data from hundreds of users, which was edited by a domain expert in regenerative farming and natural farming (approximately 2,000 instruct examples). This was combined with a small portion of synthetic and semi-synthetic datasets related to regenerative agriculture and natural farming, including some non-English samples. The results were far superior to our previous releases of the Natural Farming Gemma and Mistral fine-tunes. This Natural Farming Mistral 7B scored best in our preliminary benchmarking, so it was converted to GGUF and uploaded to the Hugging Face Hub in the hope that it will help farmers everywhere and inspire future work.
+
+Shout out to roger j (bhugxer) for help with the dataset and training framework.
+
+Testing and further compiling to integrate into on-device app interfaces are ongoing. This project was done for Copyleft Cultivars, a nonprofit, in partnership with the Open Nutrient Project and Evergreen State College. It serves to democratize access to farming knowledge and support the protection of vulnerable plants.
+
+This is V1 beta. It runs locally on Ollama with some experimental configuration, so you can use it off the grid and in places where internet is not accessible (i.e., most farms I've been on).
+
+This Mistral model was trained with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
+
+[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
```
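The local Ollama setup mentioned in the card can be sketched with a Modelfile. This is a minimal sketch, not the project's published configuration: the GGUF filename, model name, and parameter value below are assumptions.

```
# Hypothetical Modelfile — the .gguf filename is an assumption
FROM ./natural-farming-mistral-7b.Q4_K_M.gguf

# Mistral-instruct style prompt wrapping
TEMPLATE """[INST] {{ .Prompt }} [/INST]"""

PARAMETER temperature 0.7
```

With a Modelfile like this in the current directory, `ollama create natural-farming -f Modelfile` registers the model and `ollama run natural-farming` starts an offline chat session, which is what makes the model usable on farms without internet access.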