Introducing multi-backends (TRT-LLM, vLLM) support for Text Generation Inference
from optimum.onnxruntime import ORTModelForSequenceClassification
# Load the model from the hub and export it to the ONNX format
model_id = "distilbert-base-uncased-finetuned-sst-2-english"
model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
Encrypt, filter, and decrypt images using FHE
Encrypt tweets for safe sentiment analysis
Encrypt and predict health conditions
Anonymize text using FHE
Predict credit card approval using encrypted data