---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
inference: false
---

# SetFit Classification Model on Conversion Dataset with L6 SBERT Model as Base

This is a SetFit model that uses an L6 SBERT model as its base for classification.

## Usage (SetFit)

First, install the setfit library:

```
pip install setfit
```

Then you can use the model like this:

```python
from setfit import SetFitModel

# Load the fine-tuned model from the Hugging Face Hub
model = SetFitModel.from_pretrained("nayan06/binary-classifier-conversion-intent-1.1-l6")

# Predict the conversion-intent label for a list of texts
prediction = model(["i want to buy thing"])
```
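
If you need scores rather than hard labels, a minimal sketch assuming your installed setfit version exposes `predict_proba`:

```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("nayan06/binary-classifier-conversion-intent-1.1-l6")

# Per-class probabilities from the fitted classification head
probabilities = model.predict_proba(["i want to buy thing"])
print(probabilities)
```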

## Evaluation Results

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=nayan06/binary-classifier-conversion-intent-1.1-l6)

## Training

The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 2163 with parameters:

```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`

Parameters of the `fit()` method:

```
{
    "epochs": 1,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": 2163,
    "warmup_steps": 217,
    "weight_decay": 0.01
}
```
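
The parameters above come from the underlying sentence-transformers `fit()` call. For reference, a rough reconstruction of such a run with the legacy `SetFitTrainer` API (setfit < 1.0) could look like the sketch below; the base checkpoint, dataset split, and column names are assumptions, not values stated on this card:

```python
from datasets import load_dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Assumption: "L6" refers to a sentence-transformers MiniLM-L6 checkpoint;
# the card does not name the exact base model.
model = SetFitModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")

# Assumption: the dataset has a "train" split with text/label columns.
dataset = load_dataset("nayan06/conversion1.0")

trainer = SetFitTrainer(
    model=model,
    train_dataset=dataset["train"],
    loss_class=CosineSimilarityLoss,  # the loss listed above
    batch_size=16,                    # matches the DataLoader parameters
    num_epochs=1,                     # matches "epochs": 1
    learning_rate=2e-05,              # matches optimizer_params["lr"]
)
trainer.train()
```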

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
  (2): Normalize()
)
```
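
The classification head sits on top of this embedding stack. A minimal sketch to inspect the embeddings directly, assuming the standard `model_body` attribute of `SetFitModel`:

```python
import numpy as np
from setfit import SetFitModel

model = SetFitModel.from_pretrained("nayan06/binary-classifier-conversion-intent-1.1-l6")

# model_body is the underlying SentenceTransformer shown above:
# BERT encoder -> mean pooling -> L2 normalization
embeddings = model.model_body.encode(["i want to buy thing"])
print(embeddings.shape)                    # (1, 384) word_embedding_dimension
print(np.linalg.norm(embeddings, axis=1))  # ~1.0 due to the Normalize() layer
```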

## Dataset Used

[nayan06/conversion1.0](https://huggingface.co/datasets/nayan06/conversion1.0)
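
To inspect the training data, a minimal sketch using the `datasets` library; the split name is an assumption, so check the dataset card:

```python
from datasets import load_dataset

dataset = load_dataset("nayan06/conversion1.0")
print(dataset)              # available splits and columns
print(dataset["train"][0])  # first example, assuming a "train" split
```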

## Citing & Authors