Unable to Generate Non-Normalized Embeddings with Sentence-Transformers

#95
by ArthurGprog - opened

I've encountered an interesting behavior when working with the Sentence-Transformers library. When using model.encode(sentences), the output consistently consists of L2-normalized embeddings. While this matches the documentation's note on embedding normalization, I've found that setting the parameter normalize_embeddings=False does not override this behavior as one might expect.
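To make the symptom concrete, one way to verify whether encode() returned normalized vectors is to inspect their row-wise L2 norms. A minimal NumPy sketch (the embeddings array here is a synthetic stand-in for the output of model.encode(sentences, normalize_embeddings=False)):

```python
import numpy as np

def is_l2_normalized(embeddings: np.ndarray, atol: float = 1e-4) -> bool:
    """Return True if every row of the (n, d) array has unit L2 norm (within tolerance)."""
    norms = np.linalg.norm(embeddings, axis=1)
    return bool(np.allclose(norms, 1.0, atol=atol))

# Hypothetical embeddings; in practice you would pass the array returned by encode().
embeddings = np.array([[3.0, 4.0], [1.0, 0.0], [0.5, 0.5]])
print(is_l2_normalized(embeddings))  # False: the first row has norm 5.0
```

If this returns True even with normalize_embeddings=False, the normalization is happening regardless of the flag.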

In contrast, when using the Transformers library directly, I am able to obtain non-normalized embeddings.
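For context, the Transformers route typically means pooling the token embeddings yourself, with no normalization step. Below is a sketch of mask-aware mean pooling in NumPy, with synthetic tensors standing in for the model's last_hidden_state and the tokenizer's attention_mask (the pooling strategy itself is an assumption; the actual model may use a different one):

```python
import numpy as np

def mean_pool(last_hidden_state: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Mask-aware mean pooling: average token vectors over non-padding positions.
    No L2 normalization is applied, so the result keeps its raw magnitude."""
    mask = attention_mask[:, :, None].astype(last_hidden_state.dtype)  # (batch, seq, 1)
    summed = (last_hidden_state * mask).sum(axis=1)                    # (batch, dim)
    counts = np.clip(mask.sum(axis=1), 1e-9, None)                     # avoid divide-by-zero
    return summed / counts

# Synthetic stand-ins: batch=1, seq=3, dim=2; the last token is padding
# and must not contribute to the mean.
hidden = np.array([[[2.0, 0.0], [4.0, 2.0], [9.0, 9.0]]])
mask = np.array([[1, 1, 0]])
print(mean_pool(hidden, mask))  # [[3. 1.]]; note the norm is not 1
```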

I'm curious whether this is intended behavior specific to Sentence-Transformers, or whether there is an alternative way to obtain non-normalized embeddings within the library. I'd be grateful for any insights.