LoRA weights
#6
by NohTow - opened
Hello,
The paper mentions that the models are trained using LoRA with a rank of 8.
Would it be possible to release the actual LoRA weights and configuration rather than only the merged version, as we did for MonoQwen? This would make it possible to share the base Qwen backbone across different models (embeddings, reranking, generation, ...), as in the sketch below.
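For context, here is a minimal sketch of what we mean by sharing the backbone, using PEFT to attach several task-specific adapters to a single set of base weights (the base model id and adapter repo names are hypothetical placeholders, not the actual checkpoints):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the shared Qwen backbone once (placeholder model id).
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-VL-2B-Instruct")

# Attach task-specific LoRA adapters on top of the same weights
# (adapter repo names are illustrative).
model = PeftModel.from_pretrained(base, "org/reranker-lora", adapter_name="rerank")
model.load_adapter("org/embedding-lora", adapter_name="embed")

# Switch between tasks without duplicating the backbone in memory.
model.set_adapter("rerank")
# ... run reranking ...
model.set_adapter("embed")
# ... run embedding ...
```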
We tried to perform LoRA extraction from the merged checkpoint with a target rank of 8, but the results fall short of what is expected.
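For reference, our extraction roughly follows the usual approach of approximating the per-layer weight delta with a truncated SVD (the function below is an illustrative sketch, not the exact script we used). One likely source of the gap is that the true update is only low-rank up to the LoRA scaling `alpha / r`, and any drift from merging in a different dtype is not exactly rank 8:

```python
import torch

def extract_lora(w_base: torch.Tensor, w_merged: torch.Tensor, rank: int = 8):
    """Approximate delta = w_merged - w_base with a rank-`rank` product B @ A."""
    delta = (w_merged - w_base).float()
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    # Keep the top-`rank` singular directions; split sqrt(s) between A and B.
    sqrt_s = torch.sqrt(s[:rank])
    lora_b = u[:, :rank] * sqrt_s          # shape: (out_features, rank)
    lora_a = sqrt_s[:, None] * vh[:rank]   # shape: (rank, in_features)
    return lora_a, lora_b
```

Having the original adapters and their config (target modules, alpha, dropout) would avoid this lossy reconstruction entirely.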
Thank you!