prnshv committed · verified · Commit d534226 · 1 parent: 74b554a

Update README.md

Files changed (1): README.md (+36 −9)
README.md CHANGED

@@ -1,23 +1,50 @@
 ---
-base_model: unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit
+base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
 tags:
 - text-generation-inference
 - transformers
-- unsloth
 - mistral
-- trl
-- sft
 license: apache-2.0
 language:
 - en
 ---
 
-# Uploaded model
+# Model Card for ORANSight Mistral-12B (Nemo)
 
-- **Developed by:** prnshv
+This model belongs to the first release of the ORANSight family of models.
+
+- **Developed by:** NextG Lab @ NC State
 - **License:** apache-2.0
-- **Finetuned from model:** unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit
+- **Context Window:** 128K
+- **Fine-Tuning Framework:** Unsloth
+
+### Generate with Transformers
+Below is a quick example of how to use the model with Hugging Face Transformers:
+
+```python
+from transformers import pipeline
+
+# Example query
+messages = [
+    {"role": "system", "content": "You are an O-RAN expert assistant."},
+    {"role": "user", "content": "Explain the E2 interface."},
+]
+
+# Load the model
+chatbot = pipeline("text-generation", model="prnshv/ORANSight_Mistral_Nemo_Instruct")
+result = chatbot(messages)
+print(result)
+```
 
-This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
+### Coming Soon
+A detailed paper documenting the experiments and results achieved with this model will be available soon. In the meantime, if you use this model, please cite the paper below to acknowledge the foundational work that enabled this fine-tuning.
 
-[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+```bibtex
+@article{gajjar2024oran,
+  title={ORAN-Bench-13K: An Open Source Benchmark for Assessing LLMs in Open Radio Access Networks},
+  author={Gajjar, Pranshav and Shah, Vijay K},
+  journal={arXiv preprint arXiv:2407.06245},
+  year={2024}
+}
+```
+---
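The pipeline call in the README above returns the full conversation, not just the model's answer. A minimal sketch of extracting the assistant's reply, assuming the message-list output shape used by recent transformers chat pipelines; the sample `result` below is fabricated for illustration, not actual model output:

```python
def last_assistant_reply(result):
    """Return the content of the final assistant message in a chat
    pipeline result, or None if no assistant turn is present."""
    # Chat pipelines return a list with one dict per input; its
    # "generated_text" holds the message list, ending with the new turn.
    messages = result[0]["generated_text"]
    for message in reversed(messages):
        if message["role"] == "assistant":
            return message["content"]
    return None


# Illustrative result in the chat-pipeline output shape (not real output)
result = [{
    "generated_text": [
        {"role": "system", "content": "You are an O-RAN expert assistant."},
        {"role": "user", "content": "Explain the E2 interface."},
        {"role": "assistant", "content": "The E2 interface connects the near-RT RIC to E2 nodes."},
    ]
}]

print(last_assistant_reply(result))
```

With the real pipeline, replace the hand-written `result` with the return value of `chatbot(messages)`.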