SentenceTransformer

This is a sentence-transformers model trained on the geo_7k_cellxgene_3_5k_multiplets dataset. It maps sentences & paragraphs to a 2048-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 2048 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset: geo_7k_cellxgene_3_5k_multiplets
  • Language: code

Full Model Architecture

SentenceTransformer(
  (0): MMContextEncoder(
    (text_encoder): BertModel(
      (embeddings): BertEmbeddings(
        (word_embeddings): Embedding(28996, 768, padding_idx=0)
        (position_embeddings): Embedding(512, 768)
        (token_type_embeddings): Embedding(2, 768)
        (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
        (dropout): Dropout(p=0.1, inplace=False)
      )
      (encoder): BertEncoder(
        (layer): ModuleList(
          (0-11): 12 x BertLayer(
            (attention): BertAttention(
              (self): BertSdpaSelfAttention(
                (query): Linear(in_features=768, out_features=768, bias=True)
                (key): Linear(in_features=768, out_features=768, bias=True)
                (value): Linear(in_features=768, out_features=768, bias=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
              (output): BertSelfOutput(
                (dense): Linear(in_features=768, out_features=768, bias=True)
                (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
            )
            (intermediate): BertIntermediate(
              (dense): Linear(in_features=768, out_features=3072, bias=True)
              (intermediate_act_fn): GELUActivation()
            )
            (output): BertOutput(
              (dense): Linear(in_features=3072, out_features=768, bias=True)
              (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
          )
        )
      )
      (pooler): BertPooler(
        (dense): Linear(in_features=768, out_features=768, bias=True)
        (activation): Tanh()
      )
    )
    (text_adapter): AdapterModule(
      (net): Sequential(
        (0): Linear(in_features=768, out_features=512, bias=True)
        (1): ReLU(inplace=True)
        (2): Linear(in_features=512, out_features=2048, bias=True)
        (3): BatchNorm1d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (omics_adapter): AdapterModule(
      (net): Sequential(
        (0): Linear(in_features=64, out_features=512, bias=True)
        (1): ReLU(inplace=True)
        (2): Linear(in_features=512, out_features=2048, bias=True)
        (3): BatchNorm1d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
  )
)
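
Both adapter blocks above share the same two-layer MLP shape, projecting the 768-dimensional BERT text features and the 64-dimensional omics features into a common 2048-dimensional embedding space. A minimal PyTorch sketch of that module, reconstructed from the printed architecture (not the original mmcontext source):

import torch.nn as nn

class AdapterModule(nn.Module):
    # Two-layer MLP matching the architecture dump above (a sketch, not
    # the original implementation).
    def __init__(self, in_features, hidden=512, out_features=2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, hidden),   # 768 -> 512 (text) or 64 -> 512 (omics)
            nn.ReLU(inplace=True),
            nn.Linear(hidden, out_features),  # 512 -> 2048 shared embedding space
            nn.BatchNorm1d(out_features),     # per-dimension normalization
        )

    def forward(self, x):
        return self.net(x)

text_adapter = AdapterModule(in_features=768)  # applied to the BERT text encoder output
omics_adapter = AdapterModule(in_features=64)  # applied to precomputed omics embeddings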

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("jo-mengr/mmcontext-geo7k-cellxgene3.5k-pairs-cell_type")
# Run inference
sentences = [
    '{"file_record": {"dataset_path": "https://nxc-fredato.imbi.uni-freiburg.de/s/A2Kgip3knb4xmFj/download", "embeddings": {"X_hvg": "https://nxc-fredato.imbi.uni-freiburg.de/s/HHeBR7Q9QnLM85E/download", "X_pca": "https://nxc-fredato.imbi.uni-freiburg.de/s/rkHBdRGpy7qAspj/download", "X_scvi": "https://nxc-fredato.imbi.uni-freiburg.de/s/KXJjqrsrjnPKD3b/download", "X_geneformer": "https://nxc-fredato.imbi.uni-freiburg.de/s/sLBtSQxQ3HxiMyE/download"}}, "sample_id": "census_e84f2780-51e8-4cfa-8aa0-13bbfef677c7_184"}',
    "A 46-year old female's liver sample, specifically conventional dendritic cell type 1 (cDC1s) enriched in CD45+ cell suspension, with no reported liver-related diseases.",
    'Sample is an ON-bipolar cell derived from the peripheral region of the retina of a 60-year-old male with European self-reported ethnicity, mapped to GENCODE 24 reference annotation.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 2048]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
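
Because the first entry is an omics sample record and the other two are textual cell descriptions, the same similarity matrix can be used to rank captions against the sample. A hypothetical continuation of the snippet above:

# Rank the two captions (indices 1 and 2) against the sample record (index 0)
caption_scores = similarities[0, 1:]
best = int(caption_scores.argmax()) + 1
print(f"Best caption: {sentences[best][:60]}... (cosine={caption_scores.max():.4f})")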

Evaluation

Metrics

Triplet

Metric Value
cosine_accuracy 0.4943
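
Here cosine_accuracy is the fraction of evaluation triplets in which the anchor is more similar (by cosine) to its positive than to its negative, so 0.4943 is essentially chance level for this evaluator. A sketch of the standard computation, assuming precomputed anchor/positive/negative embedding matrices:

import numpy as np

def triplet_cosine_accuracy(anchors, positives, negatives):
    def cos(a, b):
        a = a / np.linalg.norm(a, axis=1, keepdims=True)
        b = b / np.linalg.norm(b, axis=1, keepdims=True)
        return (a * b).sum(axis=1)
    # A triplet counts as correct when sim(anchor, positive) > sim(anchor, negative)
    return float(np.mean(cos(anchors, positives) > cos(anchors, negatives)))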

Binary Classification

Metric Value
cosine_accuracy 0.901
cosine_accuracy_threshold 0.8416
cosine_f1 0.8594
cosine_f1_threshold 0.7717
cosine_precision 0.8086
cosine_recall 0.9171
cosine_ap 0.8752
cosine_mcc 0.786
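
These figures come from sweeping a decision threshold over pairwise cosine similarities: a pair is predicted positive when its similarity exceeds the threshold, and the reported thresholds (0.8416 for accuracy, 0.7717 for F1) are those that maximize each metric on the evaluation pairs. A sketch of the accuracy computation at a fixed threshold, assuming row-aligned embedding arrays and 0/1 labels:

import numpy as np

def cosine_accuracy_at(emb1, emb2, labels, threshold):
    # Cosine similarity of each row-aligned embedding pair
    sims = (emb1 * emb2).sum(axis=1) / (
        np.linalg.norm(emb1, axis=1) * np.linalg.norm(emb2, axis=1)
    )
    preds = sims > threshold  # predict "similar" above the threshold
    return float((preds == labels.astype(bool)).mean())

# e.g. cosine_accuracy_at(emb1, emb2, labels, 0.8416) should reproduce ~0.901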

Training Details

Training Dataset

geo_7k_cellxgene_3_5k_multiplets

Evaluation Dataset

geo_7k_cellxgene_3_5k_multiplets

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • learning_rate: 2e-05
  • num_train_epochs: 4
  • warmup_ratio: 0.1
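
Putting these values together, a run with this configuration could look roughly like the sketch below, using the Sentence Transformers v3 trainer API. The dataset handle, the split names, and the use of the ContrastiveLoss cited at the end of this card are assumptions:

from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import ContrastiveLoss
from datasets import load_dataset

model = SentenceTransformer("jo-mengr/mmcontext-geo7k-cellxgene3.5k-pairs-cell_type")
# Hypothetical handle; the card only names "geo_7k_cellxgene_3_5k_multiplets"
dataset = load_dataset("geo_7k_cellxgene_3_5k_multiplets")

args = SentenceTransformerTrainingArguments(
    output_dir="mmcontext-run",
    eval_strategy="steps",               # the non-default values listed above
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    num_train_epochs=4,
    warmup_ratio=0.1,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],  # assumed split name
    loss=ContrastiveLoss(model),
)
trainer.train()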

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • eval_use_gather_object: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss Validation Loss cosine_accuracy cosine_ap
-1 -1 - - 0.5029 -
0.5076 100 6.008 9.9084 0.5057 -
1.0152 200 4.7386 10.5698 0.4943 -
0.5076 100 4.3879 11.5229 0.4943 -
1.0152 200 4.1962 11.7110 0.5 -
1.5228 300 4.2736 12.5341 0.4971 -
2.0305 400 4.1793 13.1011 0.4943 -
-1 -1 - - - 0.3408
0.1692 100 0.1614 0.3496 - 0.3389
0.3384 200 0.1641 0.3579 - 0.3390
0.5076 300 0.1652 0.3592 - 0.3396
0.6768 400 0.1672 0.3696 - 0.3413
0.8460 500 0.1579 0.3591 - 0.3417
1.0152 600 0.1722 0.2388 - 0.3457
1.1844 700 0.1553 0.3597 - 0.3866
1.3536 800 0.1029 0.0675 - 0.6485
1.5228 900 0.059 0.0464 - 0.7094
1.6920 1000 0.0446 0.0357 - 0.7133
1.8613 1100 0.035 0.0286 - 0.7571
2.0305 1200 0.0304 0.0226 - 0.8048
2.1997 1300 0.0258 0.0293 - 0.7571
2.3689 1400 0.0226 0.0179 - 0.8204
2.5381 1500 0.0207 0.0160 - 0.8292
2.7073 1600 0.0198 0.0166 - 0.8152
2.8765 1700 0.0215 0.0157 - 0.8430
3.0457 1800 0.0183 0.0161 - 0.8544
3.2149 1900 0.0163 0.0138 - 0.8651
3.3841 2000 0.0163 0.0142 - 0.8696
3.5533 2100 0.0159 0.0129 - 0.8719
3.7225 2200 0.015 0.0129 - 0.8773
3.8917 2300 0.0157 0.0127 - 0.8752

Framework Versions

  • Python: 3.11.6
  • Sentence Transformers: 3.5.0.dev0
  • Transformers: 4.43.4
  • PyTorch: 2.6.0
  • Accelerate: 0.33.0
  • Datasets: 2.14.4
  • Tokenizers: 0.19.1
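
To approximate this environment, the released libraries can be pinned directly; the Sentence Transformers version above is a development build, so a source install (or the nearest release) is assumed:

pip install "transformers==4.43.4" "torch==2.6.0" "accelerate==0.33.0" "datasets==2.14.4" "tokenizers==0.19.1"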

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

ContrastiveLoss

@inproceedings{hadsell2006dimensionality,
    author={Hadsell, R. and Chopra, S. and LeCun, Y.},
    booktitle={2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)},
    title={Dimensionality Reduction by Learning an Invariant Mapping},
    year={2006},
    volume={2},
    pages={1735-1742},
    doi={10.1109/CVPR.2006.100}
}