# Derify/ChemRanker-alpha-sim
This Cross Encoder is finetuned from Derify/ModChemBERT-IR-BASE using hard-negative triplets derived from Derify/pubchem_10m_genmol_similarity. Positive SMILES pairs are first filtered by quality and similarity constraints, then reduced to one strongest positive target per anchor molecule to create a high-signal training set for reranking. The model computes relevance scores for pairs of SMILES strings, enabling SMILES reranking and molecular semantic search.
For this variant, the positive selection objective is pure similarity ranking where each anchor keeps the highest-similarity candidate after filtering, rather than using a QED+similarity composite score. The quality stage uses strict inequality filtering (QED > 0.85, similarity > 0.5, with similarity also bounded below 1.0), and then keeps the top-scoring pair per anchor molecule.
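The selection logic above can be sketched in plain Python. This is an illustrative re-implementation with assumed field names (`anchor`, `candidate`, `qed`, `similarity`), not the actual pipeline code:

```python
# Sketch of the positive-selection stage: candidate pairs are kept only if
# QED > 0.85 and 0.5 < similarity < 1.0, then each anchor retains its single
# highest-similarity candidate. Field names are illustrative.

def select_positives(pairs):
    """pairs: list of dicts with 'anchor', 'candidate', 'qed', 'similarity'."""
    filtered = [
        p for p in pairs
        if p["qed"] > 0.85 and 0.5 < p["similarity"] < 1.0
    ]
    best = {}
    for p in filtered:
        a = p["anchor"]
        if a not in best or p["similarity"] > best[a]["similarity"]:
            best[a] = p
    return list(best.values())

pairs = [
    {"anchor": "A", "candidate": "B", "qed": 0.90, "similarity": 0.72},
    {"anchor": "A", "candidate": "C", "qed": 0.91, "similarity": 0.81},
    {"anchor": "A", "candidate": "D", "qed": 0.80, "similarity": 0.95},  # fails QED > 0.85
    {"anchor": "E", "candidate": "F", "qed": 0.88, "similarity": 1.0},   # fails similarity < 1.0
]
print(select_positives(pairs))  # one strongest positive per anchor: A -> C
```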
Hard negatives are mined with Sentence Transformers using Derify/ChemMRL-beta as the teacher model and a TopK-PercPos-style margin setting based on NV-Retriever, with `relative_margin=0.05` and `max_negative_score_threshold = pos_score * percentage_margin` (candidates scoring above 95% of the positive's teacher score are discarded as likely false negatives). Training uses triplet-format samples with 5 mined negatives per anchor-positive pair and optimizes a multiple-negatives ranking objective, while reranking evaluation uses n-tuple samples with 30 mined negatives per query.
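The margin rule can be sketched as follows. This is an illustrative re-implementation of the TopK-PercPos filter, not the actual mining code (Sentence Transformers' `mine_hard_negatives` utility exposes a comparable `relative_margin` option):

```python
# TopK-PercPos-style filter: a mined candidate survives only if its teacher
# score stays below pos_score * (1 - relative_margin); the hardest (highest
# scoring) survivors are kept. Variable names are illustrative.

def filter_hard_negatives(pos_score, candidate_scores,
                          relative_margin=0.05, num_negatives=5):
    percentage_margin = 1.0 - relative_margin      # 0.95 for this model
    threshold = pos_score * percentage_margin      # max_negative_score_threshold
    kept = [s for s in candidate_scores if s < threshold]
    # Take the top-K hardest surviving candidates.
    return sorted(kept, reverse=True)[:num_negatives]

negs = filter_hard_negatives(
    pos_score=0.90,
    candidate_scores=[0.92, 0.88, 0.84, 0.80, 0.70, 0.60, 0.50, 0.40],
)
print(negs)  # 0.92 and 0.88 exceed 0.855 and are discarded as likely false negatives
```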
## Model Details

### Model Description
- Model Type: Cross Encoder
- Base model: Derify/ModChemBERT-IR-BASE
- Maximum Sequence Length: 512 tokens
- Number of Output Labels: 1 label
- Training Dataset:
  - Derify/pubchem_10m_genmol_similarity Mined Hard Negatives
- License: apache-2.0
### Model Sources
- Documentation: Sentence Transformers Documentation
- Documentation: Cross Encoder Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Cross Encoders on Hugging Face
## Usage

### Direct Usage (Sentence Transformers)

First install the Transformers and Sentence Transformers libraries:
```bash
pip install -U "transformers>=4.57.1,<5.0.0"
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import CrossEncoder

# Download from the 🤗 Hub
model = CrossEncoder("Derify/ChemRanker-alpha-sim")

# Get scores for pairs of SMILES strings
pairs = [
    ['c1snnc1C[NH2+]Cc1cc2c(s1)CCC2', 'c1snnc1CCC[NH2+]Cc1cc2c(s1)CCC2'],
    ['c1sc2c(c1-c1nc(C3CCOC3)no1)CCCC2', 'O=CCc1noc(-c2csc3c2CCCC3)n1'],
    ['c1sc(C[NH2+]C2CC2)nc1C[NH+]1CCN2CCCC2C1', 'FC(F)[NH2+]Cc1nc(C[NH+]2CCN3CCCC3C2)cs1'],
    ['c1sc(CC[NH+]2CCOCC2)nc1C[NH2+]C1CC1', 'CCc1nc(C[NH2+]C2CC2)cs1'],
    ['c1sc(CC2CCC[NH2+]2)nc1C1CCCO1', 'c1sc(CC2CCC[NH2+]2)nc1C1CCCC1'],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)

# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'c1snnc1C[NH2+]Cc1cc2c(s1)CCC2',
    [
        'c1snnc1CCC[NH2+]Cc1cc2c(s1)CCC2',
        'O=CCc1noc(-c2csc3c2CCCC3)n1',
        'FC(F)[NH2+]Cc1nc(C[NH+]2CCN3CCCC3C2)cs1',
        'CCc1nc(C[NH2+]C2CC2)cs1',
        'c1sc(CC2CCC[NH2+]2)nc1C1CCCC1',
    ],
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
## Evaluation

### Metrics

#### Cross Encoder Reranking

- Evaluated with `CrossEncoderRerankingEvaluator` with these parameters:

  ```json
  {
      "at_k": 10
  }
  ```
| Metric | Value |
|---|---|
| map | 0.4323 |
| mrr@10 | 0.6975 |
| ndcg@10 | 0.7034 |
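For reference, the reported ranking metrics can be computed for a single query as follows. This is a minimal sketch of MRR@10 and NDCG@10, not the `CrossEncoderRerankingEvaluator` implementation:

```python
import math

# ranked_relevance: candidates sorted by model score, 1 marks the true positive.

def mrr_at_k(ranked_relevance, k=10):
    # Reciprocal rank of the first relevant candidate within the top k.
    for i, rel in enumerate(ranked_relevance[:k]):
        if rel:
            return 1.0 / (i + 1)
    return 0.0

def ndcg_at_k(ranked_relevance, k=10):
    # Discounted cumulative gain, normalized by the ideal ordering.
    dcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ranked_relevance[:k]))
    ideal = sorted(ranked_relevance, reverse=True)
    idcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ideal[:k]))
    return dcg / idcg if idcg > 0 else 0.0

# One relevant candidate ranked 2nd among 31 (1 positive + 30 mined negatives):
relevance = [0, 1] + [0] * 29
print(mrr_at_k(relevance))   # 0.5
print(ndcg_at_k(relevance))  # ≈ 0.6309
```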
## Training Details

### Training Dataset

#### GenMol Similarity Hard Negatives

- Dataset: GenMol Similarity Hard Negatives
- Size: 3,269,544 training samples
- Columns: `smiles_a`, `smiles_b`, and `negative`
- Approximate statistics based on the first 1000 samples (all columns are strings):

  | | smiles_a | smiles_b | negative |
  |:---|:---|:---|:---|
  | min | 19 characters | 20 characters | 19 characters |
  | mean | 33.64 characters | 34.16 characters | 33.28 characters |
  | max | 65 characters | 54 characters | 57 characters |

- Samples:

  | smiles_a | smiles_b | negative |
  |:---|:---|:---|
  | c1sc2cc3c(cc2c1CC[NH2+]C1CC1)OCCO3 | FC(F)(F)[NH2+]CCc1csc2cc3c(cc12)OCCO3 | [NH3+]CCCc1cc2c(cc1C1CC1)OCO2 |
  | c1sc2cc3c(cc2c1CC[NH2+]C1CC1)OCCO3 | FC(F)(F)[NH2+]CCc1csc2cc3c(cc12)OCCO3 | COc1cc2c(cc1C[NH2+]C1CCC1)OCO2 |
  | c1sc2cc3c(cc2c1CC[NH2+]C1CC1)OCCO3 | FC(F)(F)[NH2+]CCc1csc2cc3c(cc12)OCCO3 | O=c1[nH]c2cc3c(cc2cc1CNC1CCCCC1)OCCO3 |

- Loss: `MultipleNegativesRankingLoss` with these parameters:

  ```json
  {
      "scale": 10.0,
      "num_negatives": 4,
      "activation_fn": "torch.nn.modules.activation.Sigmoid"
  }
  ```
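To make the loss configuration concrete, here is a minimal single-sample sketch of how a multiple-negatives ranking objective combines the sigmoid activation and `scale=10.0` listed above. This is illustrative only; the actual loss is computed batch-wise in PyTorch:

```python
import math

# One training sample: the cross-encoder's raw logits are squashed through a
# sigmoid, multiplied by the scale, and the positive must win a softmax over
# [positive, negatives]. Loss is the negative log-probability of the positive.

def mnrl_sample_loss(pos_logit, neg_logits, scale=10.0):
    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))
    scores = [scale * sigmoid(z) for z in [pos_logit] + list(neg_logits)]
    log_z = math.log(sum(math.exp(s) for s in scores))
    return -(scores[0] - log_z)  # -log softmax(scores)[0]

# A well-separated positive yields a near-zero loss:
loss = mnrl_sample_loss(pos_logit=4.0, neg_logits=[-2.0, -1.5, -3.0, -0.5])
print(loss)
```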
### Evaluation Dataset

#### GenMol Similarity Hard Negatives
- Dataset: GenMol Similarity Hard Negatives
- Size: 165,968 evaluation samples
- Columns: `smiles_a`, `smiles_b`, and `negative_1` through `negative_30`
- Approximate statistics based on the first 1000 samples (all columns are strings; lengths in characters):

  | Column | Min | Mean | Max |
  |:---|---:|---:|---:|
  | smiles_a | 17 | 37.57 | 96 |
  | smiles_b | 14 | 34.46 | 70 |
  | negative_1 | 16 | 35.94 | 77 |
  | negative_2 | 12 | 35.1 | 77 |
  | negative_3 | 14 | 35.09 | 81 |
  | negative_4 | 17 | 35.38 | 74 |
  | negative_5 | 17 | 35.17 | 70 |
  | negative_6 | 14 | 35.25 | 84 |
  | negative_7 | 16 | 35.2 | 77 |
  | negative_8 | 13 | 35.05 | 80 |
  | negative_9 | 11 | 35.25 | 90 |
  | negative_10 | 11 | 35.23 | 74 |
  | negative_11 | 12 | 34.88 | 60 |
  | negative_12 | 14 | 35.42 | 66 |
  | negative_13 | 13 | 35.36 | 69 |
  | negative_14 | 13 | 34.81 | 77 |
  | negative_15 | 10 | 35.12 | 77 |
  | negative_16 | 17 | 35.05 | 69 |
  | negative_17 | 14 | 35.47 | 72 |
  | negative_18 | 14 | 35.12 | 65 |
  | negative_19 | 18 | 35.44 | 72 |
  | negative_20 | 14 | 35.0 | 64 |
  | negative_21 | 18 | 35.79 | 81 |
  | negative_22 | 17 | 35.43 | 67 |
  | negative_23 | 14 | 35.76 | 68 |
  | negative_24 | 14 | 35.29 | 62 |
  | negative_25 | 17 | 35.42 | 66 |
  | negative_26 | 16 | 35.31 | 83 |
  | negative_27 | 18 | 35.64 | 77 |
  | negative_28 | 18 | 35.47 | 77 |
  | negative_29 | 11 | 35.23 | 65 |
  | negative_30 | 16 | 35.26 | 77 |

- Samples: each row pairs an anchor (`smiles_a`) and its positive (`smiles_b`) with 30 mined negative SMILES, e.g. anchor `c1snnc1C[NH2+]Cc1cc2c(s1)CCC2` with positive `c1snnc1CCC[NH2+]Cc1cc2c(s1)CCC2`.
- Loss: `MultipleNegativesRankingLoss` with these parameters:

  ```json
  {
      "scale": 10.0,
      "num_negatives": 4,
      "activation_fn": "torch.nn.modules.activation.Sigmoid"
  }
  ```
### Training Hyperparameters

#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 256
- `per_device_eval_batch_size`: 256
- `torch_empty_cache_steps`: 1000
- `learning_rate`: 3e-05
- `weight_decay`: 1e-05
- `max_grad_norm`: None
- `lr_scheduler_type`: warmup_stable_decay
- `lr_scheduler_kwargs`: {'num_decay_steps': 6385, 'warmup_type': 'linear', 'decay_type': '1-sqrt'}
- `warmup_steps`: 6385
- `seed`: 12
- `data_seed`: 24681357
- `bf16`: True
- `bf16_full_eval`: True
- `tf32`: True
- `dataloader_num_workers`: 8
- `dataloader_prefetch_factor`: 2
- `load_best_model_at_end`: True
- `optim`: stable_adamw
- `optim_args`: decouple_lr=True,max_lr=3e-05
- `dataloader_persistent_workers`: True
- `resume_from_checkpoint`: False
- `gradient_checkpointing`: True
- `torch_compile`: True
- `torch_compile_backend`: inductor
- `torch_compile_mode`: max-autotune
- `eval_on_start`: True
- `batch_sampler`: no_duplicates
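The `warmup_stable_decay` schedule configured above can be sketched as a learning-rate multiplier. This is an illustrative re-implementation (assuming the decay window ends at the final step; step counts are taken from the training logs, where 3 epochs span 19,155 steps):

```python
import math

# Linear warmup over `warmup_steps`, constant plateau, then a "1-sqrt" decay
# over the final `num_decay_steps` steps. Returns a multiplier on the peak LR.

def wsd_multiplier(step, total_steps, warmup_steps=6385, num_decay_steps=6385):
    if step < warmup_steps:                 # linear warmup
        return step / warmup_steps
    decay_start = total_steps - num_decay_steps
    if step < decay_start:                  # stable plateau
        return 1.0
    # 1-sqrt decay down to 0 at total_steps
    return 1.0 - math.sqrt((step - decay_start) / num_decay_steps)

total = 19155  # 3 epochs x 6385 steps per epoch (from the training logs)
print(wsd_multiplier(0, total))      # 0.0
print(wsd_multiplier(10000, total))  # 1.0 (plateau)
print(wsd_multiplier(total, total))  # 0.0
```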
#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 256
- `per_device_eval_batch_size`: 256
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: 1000
- `learning_rate`: 3e-05
- `weight_decay`: 1e-05
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: None
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: warmup_stable_decay
- `lr_scheduler_kwargs`: {'num_decay_steps': 6385, 'warmup_type': 'linear', 'decay_type': '1-sqrt'}
- `warmup_ratio`: 0.0
- `warmup_steps`: 6385
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 12
- `data_seed`: 24681357
- `jit_mode_eval`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: True
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 8
- `dataloader_prefetch_factor`: 2
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: stable_adamw
- `optim_args`: decouple_lr=True,max_lr=3e-05
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `project`: huggingface
- `trackio_space_id`: trackio
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: True
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: False
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: True
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: True
- `torch_compile_backend`: inductor
- `torch_compile_mode`: max-autotune
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: no
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: True
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: True
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}

</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | ndcg@10 |
|---|---|---|---|---|
| 1.0963 | 7000 | 0.0046 | - | - |
| 1.2529 | 8000 | 0.0043 | - | - |
| 1.4096 | 9000 | 0.0038 | - | - |
| 1.5662 | 10000 | 0.0035 | - | - |
| 1.7228 | 11000 | 0.0033 | - | - |
| 1.8794 | 12000 | 0.0031 | - | - |
| 2.0 | 12770 | - | 1.5814 | 0.6986 |
| 2.0360 | 13000 | 0.003 | - | - |
| 2.1926 | 14000 | 0.0027 | - | - |
| 2.3493 | 15000 | 0.0025 | - | - |
| 2.5059 | 16000 | 0.0025 | - | - |
| 2.6625 | 17000 | 0.0024 | - | - |
| 2.8191 | 18000 | 0.0024 | - | - |
| 2.9757 | 19000 | 0.0024 | - | - |
| **3.0** | **19155** | **-** | **1.5688** | **0.7034** |

- The bold row denotes the saved checkpoint.
## Environmental Impact
Carbon emissions were measured using CodeCarbon.
- Energy Consumed: 12.236 kWh
- Carbon Emitted: 2.512 kg of CO2
- Hours Used: 19.958 hours
### Training Hardware
- On Cloud: No
- GPU Model: 2 x NVIDIA GeForce RTX 3090
- CPU Model: AMD Ryzen 7 3700X 8-Core Processor
- RAM Size: 62.70 GB
### Framework Versions
- Python: 3.13.7
- Sentence Transformers: 5.1.2
- Transformers: 4.57.1
- PyTorch: 2.9.0+cu128
- Accelerate: 1.11.0
- Datasets: 4.4.1
- Tokenizers: 0.22.1
## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
#### NV-Retriever

```bibtex
@misc{moreira2025nvretrieverimprovingtextembedding,
    title={NV-Retriever: Improving text embedding models with effective hard-negative mining},
    author={Gabriel de Souza P. Moreira and Radek Osmulski and Mengyao Xu and Ronay Ak and Benedikt Schifferer and Even Oldridge},
    year={2025},
    eprint={2407.15831},
    archivePrefix={arXiv},
    primaryClass={cs.IR},
    url={https://arxiv.org/abs/2407.15831},
}
```