This model is gated: the repository is publicly visible, but you must agree to share your contact information and accept the access conditions before you can download its files and content.

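Because the repository is gated, downloading it from code requires an authenticated Hugging Face session. A minimal sketch, assuming you have already accepted the conditions on the model page and created an access token (the token value below is a placeholder):

from huggingface_hub import login

# paste a token from https://huggingface.co/settings/tokens;
# the account must have accepted this model's access conditions
login(token="hf_...")  # alternatively, run `huggingface-cli login` in a terminal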

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Using device: {device}")

# load the model and tokenizer; trust_remote_code is required for the custom architecture
model = AutoModelForCausalLM.from_pretrained("orionweller/test-flex-gpt", trust_remote_code=True)
model = model.to(device)
model.eval()  # switch to inference mode
tokenizer = AutoTokenizer.from_pretrained("orionweller/test-flex-gpt", trust_remote_code=True)

# encode a prompt and generate a short continuation
prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt").input_ids
# move the input ids to the same device as the model
inputs = inputs.to(device)
outputs = model.generate(inputs, max_new_tokens=5, do_sample=True, top_p=0.95)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
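Beyond generation, a checkpoint like this can sometimes be loaded as a bare backbone for feature extraction. The sketch below is an assumption rather than documented usage: it works only if the repository's remote code registers a base AutoModel class whose forward pass returns last_hidden_state.

from transformers import AutoModel, AutoTokenizer
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Assumption: the repo's remote code registers a base AutoModel class that
# returns last_hidden_state; if it does not, this call will raise and the
# generation example above remains the supported path.
model = AutoModel.from_pretrained("orionweller/test-flex-gpt", trust_remote_code=True).to(device)
model.eval()
tokenizer = AutoTokenizer.from_pretrained("orionweller/test-flex-gpt", trust_remote_code=True)

text = "The capital of France is Paris."
batch = tokenizer(text, return_tensors="pt").to(device)

with torch.no_grad():
    hidden = model(**batch).last_hidden_state  # shape: (1, seq_len, hidden_dim)

# mean-pool across the sequence dimension to get a single embedding per input
embedding = hidden.mean(dim=1)
print(embedding.shape)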