MobileBERT fine-tuned on the SQuAD v2 dataset
This model is based on the MobileBERT architecture, which makes it suitable for mobile devices and other resource-constrained environments.
Usage
Using the transformers library, first load the model and the tokenizer:
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "aware-ai/mobilebert-squadv2"
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
Then create a question-answering pipeline:
qa_engine = pipeline('question-answering', model=model, tokenizer=tokenizer)
QA_input = {
    'question': 'your question?',
    'context': 'your context'
}
res = qa_engine(QA_input)
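The pipeline returns the answer text together with its score and character offsets. Under the hood, extractive QA models of this kind emit a start logit and an end logit per token, and the answer is the highest-scoring valid span. Below is a minimal illustrative sketch of that span-selection idea using made-up tokens and logit values; it is not the library's internal implementation.

```python
# Made-up tokens and logits for illustration only.
tokens = ["what", "is", "mobilebert", "?", "[SEP]",
          "mobilebert", "is", "a", "compact", "bert"]
start_logits = [0.1, 0.0, 0.2, 0.0, 0.0, 3.1, 0.2, 0.1, 0.5, 0.4]
end_logits   = [0.0, 0.1, 0.3, 0.0, 0.0, 0.2, 0.1, 0.2, 0.6, 2.9]

def best_span(start_logits, end_logits, max_len=15):
    # Pick the (start, end) pair with the highest combined logit score,
    # requiring start <= end and limiting the span length.
    best = (0, 0, float("-inf"))
    for s, s_score in enumerate(start_logits):
        for e in range(s, min(s + max_len, len(end_logits))):
            score = s_score + end_logits[e]
            if score > best[2]:
                best = (s, e, score)
    return best[0], best[1]

s, e = best_span(start_logits, end_logits)
print(" ".join(tokens[s:e + 1]))  # mobilebert is a compact bert
```

In practice the pipeline also handles tokenization, long-context striding, and mapping token indices back to character positions, so `res['answer']` already contains the extracted text.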