How to run this model on mobile device (5)
#9 opened about 7 hours ago by sadaqathunzai
NVIDIA Triton Inference Server Compatibility?
#8 opened about 17 hours ago by RohanAdwankar
Missing file when using q4f16, q4, int8, uint8, and others via transformers.js
#7 opened 8 days ago by sroussey
Compatibility with transformers in Python
#6 opened 13 days ago by Fractalapps
q4f16 ONNX model issue (3)
#5 opened 15 days ago by mrniamster