running on local machine

#19
by saidavanam - opened

I have a laptop with a GTX 1050 GPU. My question is: am I able to run this model locally on my machine or not?

Ask DeepSeek Chat itself and you'll get a good guideline.

One can run it locally using llama.cpp.
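
For example, a minimal sketch with the llama-cpp-python bindings, assuming you have a quantized GGUF copy of the model (the file name below is a placeholder); on a 1050's small VRAM you'd offload only some layers:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="model-q4_k_m.gguf",  # placeholder: a quantized GGUF export of the model
    n_gpu_layers=20,                 # offload a subset of layers; the GTX 1050 has very little VRAM
    n_ctx=2048,                      # keep the context modest to save memory
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}]
)
print(out["choices"][0]["message"]["content"])
```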

So can we run it on a single 3090 Ti? Or, I have 3 x 3090s connected over LAN; can I distribute the model across them with Ray?
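
One way to try that is vLLM, which uses Ray under the hood for multi-GPU and multi-node serving. A rough sketch, assuming a hypothetical Qwen2-architecture checkpoint and that you've started a Ray cluster across the machines first; note that tensor parallelism requires the attention head count to be divisible by the GPU count, so 3 GPUs may not work for every model:

```python
# On the head node:    ray start --head --port=6379
# On each worker node: ray start --address=<head-ip>:6379
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen2-7B-Instruct",  # placeholder model id, swap in the actual checkpoint
    tensor_parallel_size=3,          # must divide the model's attention head count;
                                     # if it doesn't, fall back to 2 GPUs or pipeline parallelism
)

outputs = llm.generate(["Hello!"], SamplingParams(max_tokens=64))
print(outputs[0].outputs[0].text)
```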

Another option that would work perfectly well is to use an inference engine to run the model. Try LM Studio.
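
LM Studio exposes an OpenAI-compatible local server (default port 1234), so once the model is loaded you can talk to it with the standard OpenAI client. A minimal sketch; the model name is a placeholder since LM Studio serves whatever model you have loaded:

```python
from openai import OpenAI

# Point the client at LM Studio's local server; the API key is unused but required.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

resp = client.chat.completions.create(
    model="local-model",  # placeholder: LM Studio answers with the currently loaded model
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)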

The model arch is Qwen2; only the chat template is different.
If you have ever run Qwen2 models, this one works the same way.
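
Since the chat template ships with the tokenizer, loading it with transformers picks up the right prompt format automatically. A minimal sketch, with a placeholder model id standing in for the actual checkpoint:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2-7B-Instruct"  # placeholder: use the actual Qwen2-architecture checkpoint
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# apply_chat_template uses the template bundled with this model's tokenizer,
# which is where it differs from stock Qwen2.
prompt = tok.apply_chat_template(
    [{"role": "user", "content": "Hello!"}],
    tokenize=False,
    add_generation_prompt=True,
)
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```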
