# Model Card for branchy_phi-2_base
Phi-2 is a Transformer with 2.7 billion parameters. It was trained using the same data sources as Phi-1.5, augmented with a new data source consisting of various synthetic NLP texts and filtered websites (selected for safety and educational value). When assessed on benchmarks testing common sense, language understanding, and logical reasoning, Phi-2 demonstrated nearly state-of-the-art performance among models with fewer than 13 billion parameters.

This version of Phi-2 adds early-exit heads to accelerate inference. Each early-exit head was trained with a self-supervised technique, using the base model's own outputs as the training signal.
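The exact head-training procedure is not documented in this card; the snippet below is a minimal sketch of what such a self-supervised objective could look like, assuming each exit head is trained to match the distribution produced by the model's final layer (a distillation-style KL loss). The function name and loss choice are illustrative assumptions, not the documented implementation.

```python
import torch
import torch.nn.functional as F

def exit_head_loss(exit_logits: torch.Tensor, final_logits: torch.Tensor) -> torch.Tensor:
    """Illustrative self-supervised objective (assumption): push an exit
    head's distribution toward the frozen final head's distribution."""
    return F.kl_div(
        F.log_softmax(exit_logits, dim=-1),
        F.softmax(final_logits.detach(), dim=-1),
        reduction="batchmean",
    )
```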
## Model Description
This model provides trained heads that turn Phi-2 into an early-exit model.
- Developed by: Florian Valade
- Shared by: Florian Valade
- Model type: Text generation
- License: MIT
- Finetuned from model: https://huggingface.co/microsoft/phi-2
## Model Sources
- Repository: [TBD]
- Paper: [TBD]
- Demo: [TBD]
## Uses
When used as provided, the model does not use early exits. To enable inference acceleration, set `head_thresholds` in the configuration.
The table below gives `head_thresholds` values for different values of ε:
| ε | head_thresholds |
|---|---|
| 0.4 | [1.0307843685150146, 0.8693032264709473, 0.6637287139892578, 0.3111608028411865] |
| 0.5 | [1.505380630493164, 1.5712471008300781, 1.1971790790557861, 0.6908178329467773] |
| 0.6 | [2.0270779132843018, 1.8969502449035645, 1.4789371490478516, 0.9875392913818359] |
| 0.7 | [2.506962537765503, 2.656052589416504, 1.924393653869629, 1.4434680938720703] |
| 0.8 | [3.3786778450012207, 2.568857192993164, 2.5665550231933594, 2.006620407104492] |
| 0.9 | [3.187114715576172, 3.442272663116455, 2.636230945587158, 2.460529088973999] |
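Conceptually, each threshold gates one exit head: generation stops at a head once its confidence exceeds the corresponding threshold; otherwise computation continues to the next head. The sketch below illustrates such a test; the actual confidence measure used by this model is not documented here, so the top-two-logit margin is an assumption.

```python
import torch

def should_exit(last_token_logits: torch.Tensor, threshold: float) -> bool:
    # Assumed confidence measure: margin between the two largest logits.
    top2 = last_token_logits.topk(2).values
    return (top2[0] - top2[1]).item() > threshold

# Example: with the ε = 0.9 row above, the first exit head would only
# fire when its margin exceeds ~3.19.
```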
Once you have selected the thresholds, you can run:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the early-exit model and the original Phi-2 tokenizer.
model = AutoModelForCausalLM.from_pretrained("valcore/branchy_phi-2_base", trust_remote_code=True, device_map="cpu")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
model.eval()

inputs = tokenizer('''def print_prime(n):
   """
   Print all primes between 1 and n
   """''', return_tensors="pt", return_attention_mask=False)

# Put the selected thresholds here (these are the ε = 0.9 values from the table above):
model.head_thresholds = torch.tensor([3.187114715576172, 3.442272663116455, 2.636230945587158, 2.460529088973999])

outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
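If you switch between ε values often, the table above can be captured in a small lookup. The dictionary below just restates the published thresholds; the helper name is illustrative.

```python
import torch

# Thresholds from the table above, keyed by ε.
HEAD_THRESHOLDS = {
    0.4: [1.0307843685150146, 0.8693032264709473, 0.6637287139892578, 0.3111608028411865],
    0.5: [1.505380630493164, 1.5712471008300781, 1.1971790790557861, 0.6908178329467773],
    0.6: [2.0270779132843018, 1.8969502449035645, 1.4789371490478516, 0.9875392913818359],
    0.7: [2.506962537765503, 2.656052589416504, 1.924393653869629, 1.4434680938720703],
    0.8: [3.3786778450012207, 2.568857192993164, 2.5665550231933594, 2.006620407104492],
    0.9: [3.187114715576172, 3.442272663116455, 2.636230945587158, 2.460529088973999],
}

def set_thresholds(model, epsilon: float) -> None:
    # Illustrative helper: attach the thresholds for a given ε to the model.
    model.head_thresholds = torch.tensor(HEAD_THRESHOLDS[epsilon])
```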
## Citation
BibTeX:
TBD
## Model Card Contact
Florian Valade