| id (string, len 6–113) | author (string, len 2–36) | task_category (39 classes) | tags (list, len 1–4.05k) | created_time (int64, 1,646B–1,742B) | last_modified (timestamp[s], 2020-05-14 13:13:12 – 2025-03-18 10:01:09) | downloads (int64, 0–118M) | likes (int64, 0–4.86k) | README (string, len 30–1.01M) | matched_task (list, len 1–10) | is_bionlp (3 classes) |
|---|---|---|---|---|---|---|---|---|---|---|
fathyshalab/massive_play-roberta-large-v1-2-0.64 | fathyshalab | text-classification | [
"sentence-transformers",
"pytorch",
"roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 1,675,873,072,000 | 2023-02-08T16:18:14 | 8 | 0 | ---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# fathyshalab/massive_play-roberta-large-v1-2-0.64
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an ef... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
LoneStriker/gemma-7b-4.0bpw-h6-exl2 | LoneStriker | text-generation | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:2305.14314",
"arxiv:2312.11805",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:... | 1,708,617,308,000 | 2024-02-22T15:57:48 | 6 | 0 | ---
library_name: transformers
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
tags: []
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, plea... | [
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | Non_BioNLP |
ravimehta/Test | ravimehta | summarization | [
"asteroid",
"summarization",
"en",
"dataset:togethercomputer/RedPajama-Data-1T",
"region:us"
] | 1,687,455,278,000 | 2023-06-22T17:35:55 | 0 | 0 | ---
datasets:
- togethercomputer/RedPajama-Data-1T
language:
- en
library_name: asteroid
metrics:
- bleurt
pipeline_tag: summarization
---
| [
"SUMMARIZATION"
] | Non_BioNLP |
Ahmed107/nllb200-ar-en_v11.1 | Ahmed107 | translation | [
"transformers",
"tensorboard",
"safetensors",
"m2m_100",
"text2text-generation",
"translation",
"generated_from_trainer",
"base_model:Ahmed107/nllb200-ar-en_v8",
"base_model:finetune:Ahmed107/nllb200-ar-en_v8",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:u... | 1,701,932,253,000 | 2023-12-07T08:02:05 | 7 | 1 | ---
base_model: Ahmed107/nllb200-ar-en_v8
license: cc-by-nc-4.0
metrics:
- bleu
tags:
- translation
- generated_from_trainer
model-index:
- name: nllb200-ar-en_v11.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proof... | [
"TRANSLATION"
] | Non_BioNLP |
satish860/distilbert-base-uncased-finetuned-emotion | satish860 | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,649,756,134,000 | 2022-08-11T12:44:06 | 47 | 0 | ---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
args: default... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
muhtasham/medium-mlm-imdb-target-tweet | muhtasham | text-classification | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,670,742,460,000 | 2022-12-11T07:10:48 | 114 | 0 | ---
datasets:
- tweet_eval
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: medium-mlm-imdb-target-tweet
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
ericzzz/falcon-rw-1b-instruct-openorca | ericzzz | text-generation | [
"transformers",
"safetensors",
"falcon",
"text-generation",
"text-generation-inference",
"en",
"dataset:Open-Orca/SlimOrca",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"region:us"
] | 1,700,859,032,000 | 2024-03-05T00:49:13 | 2,405 | 11 | ---
datasets:
- Open-Orca/SlimOrca
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- text-generation-inference
inference: false
model-index:
- name: falcon-rw-1b-instruct-openorca
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning C... | [
"TRANSLATION"
] | Non_BioNLP |
fine-tuned/FiQA2018-256-24-gpt-4o-2024-05-13-256742 | fine-tuned | feature-extraction | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"en",
"dataset:fine-tuned/FiQA2018-256-24-gpt-4o-2024-05-13-256742",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
... | 1,716,459,970,000 | 2024-05-23T10:26:22 | 9 | 0 | ---
datasets:
- fine-tuned/FiQA2018-256-24-gpt-4o-2024-05-13-256742
- allenai/c4
language:
- en
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
This model is a fine-tuned version of [**BAAI/bge-base-en-v1.5**](https://huggingface.c... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
PragmaticPete/tinyqwen | PragmaticPete | text-generation | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"pretrained",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,718,651,742,000 | 2024-06-17T19:19:41 | 14 | 0 | ---
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- pretrained
---
# Qwen2-0.5B
## Introduction
Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, inclu... | [
"QUESTION_ANSWERING",
"TRANSLATION"
] | Non_BioNLP |
Pclanglais/Larth-Mistral | Pclanglais | text-generation | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"fr",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | 1,696,941,413,000 | 2023-10-21T21:16:07 | 20 | 5 | ---
language:
- fr
library_name: transformers
license: cc-by-4.0
pipeline_tag: text-generation
widget:
- text: 'Answer in Etruscan: Who is the father of Lars?'
example_title: Lars
inference:
parameters:
temperature: 0.7
repetition_penalty: 1.2
---
Larth-Mistral is the first LLM based on the Etruscan langua... | [
"TRANSLATION"
] | Non_BioNLP |
fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-28032241 | fine-tuned | feature-extraction | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"en",
"dataset:fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-28032241",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",... | 1,716,922,458,000 | 2024-05-28T18:54:49 | 6 | 0 | ---
datasets:
- fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-28032241
- allenai/c4
language:
- en
- en
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggi... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
pEpOo/catastrophy8 | pEpOo | text-classification | [
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/all-mpnet-base-v2",
"base_model:finetune:sentence-transformers/all-mpnet-base-v2",
"model-index",
"region:us"
] | 1,702,908,844,000 | 2023-12-18T14:14:25 | 50 | 0 | ---
base_model: sentence-transformers/all-mpnet-base-v2
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: "Rly tragedy in MP: Some live to recount horror: \x89ÛÏWhen I saw coaches\
\... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
Anjaan-Khadka/Nepali-Summarization | Anjaan-Khadka | summarization | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"mT5",
"ne",
"dataset:csebuetnlp/xlsum",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,677,152,698,000 | 2023-03-17T08:45:04 | 21 | 0 | ---
datasets:
- csebuetnlp/xlsum
language:
- ne
tags:
- summarization
- mT5
widget:
- text: तीन नगरपालिकालाई समेटेर भेरी किनारमा बन्न थालेको आधुनिक नमुना सहरको काम तीव्र
गतिमा अघि बढेको छ । भेरीगंगा, गुर्भाकोट र लेकबेंसी नगरपालिकामा बन्न थालेको भेरीगंगा
उपत्यका नमुना आधुनिक सहर निर्माण हुन लागेको हो । यसले नदी ... | [
"SUMMARIZATION"
] | Non_BioNLP |
sndsabin/fake-news-classifier | sndsabin | null | [
"license:gpl-3.0",
"region:us"
] | 1,648,716,829,000 | 2022-04-07T08:58:17 | 0 | 0 | ---
license: gpl-3.0
---
**Fake News Classifier**: Text classification model to detect fake news articles!
**Dataset**: [Kaggle Fake and real news dataset](https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset)
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
TheBloke/LUNA-SOLARkrautLM-Instruct-GGUF | TheBloke | text-generation | [
"transformers",
"gguf",
"solar",
"finetune",
"dpo",
"Instruct",
"augmentation",
"german",
"text-generation",
"en",
"de",
"dataset:argilla/distilabel-math-preference-dpo",
"base_model:fblgit/LUNA-SOLARkrautLM-Instruct",
"base_model:quantized:fblgit/LUNA-SOLARkrautLM-Instruct",
"license:cc... | 1,703,336,543,000 | 2023-12-23T13:08:59 | 368 | 4 | ---
base_model: fblgit/LUNA-SOLARkrautLM-Instruct
datasets:
- argilla/distilabel-math-preference-dpo
language:
- en
- de
library_name: transformers
license: cc-by-nc-4.0
model_name: Luna SOLARkrautLM Instruct
pipeline_tag: text-generation
tags:
- finetune
- dpo
- Instruct
- augmentation
- german
inference: false
model_... | [
"TRANSLATION"
] | Non_BioNLP |
halee9/translation_en_ko | halee9 | text2text-generation | [
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-ko-en",
"base_model:finetune:Helsinki-NLP/opus-mt-ko-en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,710,610,256,000 | 2024-03-16T22:43:22 | 128 | 0 | ---
base_model: Helsinki-NLP/opus-mt-ko-en
license: apache-2.0
metrics:
- bleu
tags:
- generated_from_trainer
model-index:
- name: translation_en_ko
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete... | [
"TRANSLATION"
] | Non_BioNLP |
lamm-mit/Cephalo-Idefics-2-vision-10b-beta | lamm-mit | image-text-to-text | [
"transformers",
"safetensors",
"idefics2",
"image-text-to-text",
"nlp",
"code",
"vision",
"chemistry",
"engineering",
"biology",
"bio-inspired",
"text-generation-inference",
"materials science",
"conversational",
"multilingual",
"arxiv:2405.19076",
"license:apache-2.0",
"endpoints_... | 1,716,909,925,000 | 2024-05-30T10:34:41 | 12 | 0 | ---
language:
- multilingual
library_name: transformers
license: apache-2.0
pipeline_tag: image-text-to-text
tags:
- nlp
- code
- vision
- chemistry
- engineering
- biology
- bio-inspired
- text-generation-inference
- materials science
inference:
parameters:
temperature: 0.3
widget:
- messages:
- role: user
... | [
"QUESTION_ANSWERING"
] | Non_BioNLP |
gauravkoradiya/T5-Finetuned-Summarization-DialogueDataset | gauravkoradiya | summarization | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"summarization",
"en",
"dataset:knkarthick/dialogsum",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,681,607,546,000 | 2023-04-16T01:24:14 | 151 | 1 | ---
datasets:
- knkarthick/dialogsum
language:
- en
library_name: transformers
license: apache-2.0
metrics:
- bleu
- rouge
pipeline_tag: summarization
---
| [
"SUMMARIZATION"
] | Non_BioNLP |
MaLA-LM/lucky52-bloom-7b1-no-5 | MaLA-LM | text-generation | [
"transformers",
"pytorch",
"bloom",
"text-generation",
"generation",
"question answering",
"instruction tuning",
"multilingual",
"dataset:MBZUAI/Bactrian-X",
"arxiv:2404.04850",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:u... | 1,712,217,803,000 | 2024-12-10T09:07:41 | 14 | 0 | ---
datasets:
- MBZUAI/Bactrian-X
language:
- multilingual
library_name: transformers
license: cc-by-nc-4.0
pipeline_tag: text-generation
tags:
- generation
- question answering
- instruction tuning
---
### Model Description
This HF repository hosts instruction fine-tuned multilingual BLOOM model using the parallel ... | [
"QUESTION_ANSWERING"
] | Non_BioNLP |
RichardErkhov/jondurbin_-_airoboros-l2-13b-3.0-gguf | RichardErkhov | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | 1,721,885,085,000 | 2024-07-25T11:07:58 | 26 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
airoboros-l2-13b-3.0 - GGUF
- Model creator: https://huggingface.co/jondurbin/
- Original model: https://huggingf... | [
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | Non_BioNLP |
chienweichang/formatted_address | chienweichang | text2text-generation | [
"transformers",
"tensorboard",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"dataset:cwchang/tw_address_large",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible... | 1,702,956,972,000 | 2023-12-19T04:49:04 | 92 | 0 | ---
base_model: google/mt5-small
datasets:
- cwchang/tw_address_large
license: apache-2.0
metrics:
- rouge
tags:
- generated_from_trainer
model-index:
- name: formatted_address
results:
- task:
type: summarization
name: Summarization
dataset:
name: cwchang/tw_address_large
type: cwchang/... | [
"SUMMARIZATION"
] | Non_BioNLP |
am-azadi/gte-multilingual-base_Fine_Tuned_1e | am-azadi | sentence-similarity | [
"sentence-transformers",
"safetensors",
"new",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:25743",
"loss:MultipleNegativesRankingLoss",
"custom_code",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:Alibaba-NLP/gte-multilingual-base",
"base_model:... | 1,740,074,285,000 | 2025-02-20T17:58:47 | 11 | 0 | ---
base_model: Alibaba-NLP/gte-multilingual-base
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:25743
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: م الحين SHIA WAVES... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
rifatul123/Primary_doctor_v1 | rifatul123 | text-generation | [
"adapter-transformers",
"pytorch",
"gpt2",
"biology",
"medical",
"chemistry",
"text-generation-inference",
"text-generation",
"en",
"region:us"
] | 1,683,275,744,000 | 2023-05-05T16:57:39 | 0 | 0 | ---
language:
- en
library_name: adapter-transformers
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- biology
- medical
- chemistry
- text-generation-inference
---

![Scr... | [
"TEXT_CLASSIFICATION",
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | BioNLP |
Helsinki-NLP/opus-mt-yo-fr | Helsinki-NLP | translation | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"yo",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,744,000 | 2023-08-16T12:09:04 | 57 | 0 | ---
license: apache-2.0
tags:
- translation
---
### opus-mt-yo-fr
* source languages: yo
* target languages: fr
* OPUS readme: [yo-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/yo-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* downl... | [
"TRANSLATION"
] | Non_BioNLP |
gokuls/mobilebert_sa_GLUE_Experiment_logit_kd_stsb_256 | gokuls | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,675,061,383,000 | 2023-01-30T06:53:58 | 138 | 0 | ---
datasets:
- glue
language:
- en
license: apache-2.0
metrics:
- spearmanr
tags:
- generated_from_trainer
model-index:
- name: mobilebert_sa_GLUE_Experiment_logit_kd_stsb_256
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE STSB
type: glue
... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
Helsinki-NLP/opus-mt-tc-bible-big-itc-deu_eng_fra_por_spa | Helsinki-NLP | translation | [
"transformers",
"pytorch",
"safetensors",
"marian",
"text2text-generation",
"translation",
"opus-mt-tc-bible",
"acf",
"an",
"ast",
"ca",
"cbk",
"co",
"crs",
"de",
"egl",
"en",
"es",
"ext",
"fr",
"frm",
"fro",
"frp",
"fur",
"gcf",
"gl",
"ht",
"it",
"kea",
"la... | 1,728,377,856,000 | 2024-10-08T08:57:47 | 116 | 0 | ---
language:
- acf
- an
- ast
- ca
- cbk
- co
- crs
- de
- egl
- en
- es
- ext
- fr
- frm
- fro
- frp
- fur
- gcf
- gl
- ht
- it
- kea
- la
- lad
- lij
- lld
- lmo
- lou
- mfe
- mo
- mwl
- nap
- oc
- osp
- pap
- pcd
- pms
- pt
- rm
- ro
- rup
- sc
- scn
- vec
- wa
library_name: transformers
license: apache-2.0
tags:
-... | [
"TRANSLATION"
] | Non_BioNLP |
gokuls/BERT-tiny-Massive-intent | gokuls | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:massive",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,664,028,930,000 | 2022-09-24T14:26:13 | 10 | 0 | ---
datasets:
- massive
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: BERT-tiny-Massive-intent
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: massive
type: massive
config: en-US
split: train
... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
osanseviero/bert-base-uncased-copy | osanseviero | fill-mask | [
"transformers",
"pytorch",
"jax",
"rust",
"coreml",
"safetensors",
"bert",
"fill-mask",
"exbert",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,680,204,589,000 | 2023-04-04T06:18:11 | 14 | 0 | ---
datasets:
- bookcorpus
- wikipedia
language: en
license: apache-2.0
tags:
- exbert
duplicated_from: bert-base-uncased
---
# BERT base model (uncased)
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first... | [
"QUESTION_ANSWERING"
] | Non_BioNLP |
sud977/my-awesome-setfit-model | sud977 | text-classification | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 1,682,560,241,000 | 2023-04-27T01:53:28 | 9 | 0 | ---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# /var/folders/lm/k69sycyx5538ldsk5n0ln5000000gn/T/tmp_un7plj_/killshot977/my-awesome-setfit-model
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classi... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
YtBig/improve-a-v1 | YtBig | summarization | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain",
"summarization",
"en",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,666,707,054,000 | 2022-12-08T09:13:15 | 114 | 0 | ---
language:
- en
tags:
- autotrain
- summarization
widget:
- text: I love AutoTrain 🤗
co2_eq_emissions:
emissions: 0.9899872350262614
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 1822063032
- CO2 Emissions (in grams): 0.9900
## Validation Metrics
- Loss: 0.347
- Rouge1: 66.429
... | [
"SUMMARIZATION"
] | Non_BioNLP |
gokuls/bert_uncased_L-2_H-768_A-12_massive | gokuls | text-classification | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:massive",
"base_model:google/bert_uncased_L-2_H-768_A-12",
"base_model:finetune:google/bert_uncased_L-2_H-768_A-12",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",... | 1,696,613,565,000 | 2023-10-06T17:35:40 | 10 | 0 | ---
base_model: google/bert_uncased_L-2_H-768_A-12
datasets:
- massive
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: bert_uncased_L-2_H-768_A-12_massive
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: massive
... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
nielsr/coref-bert-large | nielsr | null | [
"transformers",
"pytorch",
"safetensors",
"exbert",
"en",
"dataset:wikipedia",
"dataset:quoref",
"dataset:docred",
"dataset:fever",
"dataset:gap",
"dataset:winograd_wsc",
"dataset:winogender",
"dataset:nyu-mll/glue",
"arxiv:2004.06870",
"license:apache-2.0",
"endpoints_compatible",
"... | 1,646,263,745,000 | 2024-12-22T10:40:56 | 38 | 1 | ---
datasets:
- wikipedia
- quoref
- docred
- fever
- gap
- winograd_wsc
- winogender
- nyu-mll/glue
language: en
license: apache-2.0
tags:
- exbert
---
# CorefBERT large model
Pretrained model on English language using Masked Language Modeling (MLM) and Mention Reference Prediction (MRP) objectives. It was introduc... | [
"COREFERENCE_RESOLUTION"
] | Non_BioNLP |
jjae/kobart-summarization-diary | jjae | text2text-generation | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"kobart-summarization-diary",
"generated_from_trainer",
"base_model:gogamza/kobart-summarization",
"base_model:finetune:gogamza/kobart-summarization",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,712,159,644,000 | 2024-04-03T16:46:18 | 14 | 0 | ---
base_model: gogamza/kobart-summarization
license: mit
tags:
- kobart-summarization-diary
- generated_from_trainer
model-index:
- name: summary2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete ... | [
"SUMMARIZATION"
] | Non_BioNLP |
tomaarsen/distilroberta-base-nli-v2 | tomaarsen | sentence-similarity | [
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"model-index",
"... | 1,714,636,181,000 | 2024-05-02T07:50:07 | 9 | 0 | ---
base_model: distilbert/distilroberta-base
language:
- en
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
pipeline_tag: sentence-... | [
"TEXT_CLASSIFICATION",
"SEMANTIC_SIMILARITY"
] | Non_BioNLP |
akshitha-k/all-MiniLM-L6-v2-stsb | akshitha-k | sentence-similarity | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:5749",
"loss:CosineSimilarityLoss",
"arxiv:1908.10084",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-... | 1,731,273,262,000 | 2024-11-10T21:14:29 | 7 | 0 | ---
base_model: sentence-transformers/all-MiniLM-L6-v2
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:5749
- loss:CosineSimilarityLoss
widget:
- source_sentence: A girl is styling her ... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
Aryanne/Bling-Sheared-Llama-1.3B-0.1-gguf | Aryanne | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 1,698,170,639,000 | 2023-10-24T18:52:51 | 159 | 4 | ---
license: apache-2.0
---
Some GGUF v2 quantizations of the model [llmware/bling-sheared-llama-1.3b-0.1](https://huggingface.co/llmware/bling-sheared-llama-1.3b-0.1)
bling-sheared-llama-1.3b-0.1 is part of the BLING ("Best Little Instruction-following No-GPU-required") model series, instruct trained on top of a... | [
"SUMMARIZATION"
] | Non_BioNLP |
aehrm/redewiedergabe-freeindirect | aehrm | token-classification | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"de",
"region:us"
] | 1,684,274,249,000 | 2023-08-23T14:11:55 | 9 | 0 | ---
language: de
tags:
- flair
- token-classification
- sequence-tagger-model
---
# REDEWIEDERGABE Tagger: free indirect STWR
This model is part of an ensemble of binary taggers that recognize German speech, thought and writing representation, that is being used in [LLpro](https://github.com/cophi-wue/LLpro). They can... | [
"TRANSLATION"
] | Non_BioNLP |
Helsinki-NLP/opus-mt-sv-umb | Helsinki-NLP | translation | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sv",
"umb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,744,000 | 2023-08-16T12:06:25 | 56 | 0 | ---
license: apache-2.0
tags:
- translation
---
### opus-mt-sv-umb
* source languages: sv
* target languages: umb
* OPUS readme: [sv-umb](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-umb/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* d... | [
"TRANSLATION"
] | Non_BioNLP |
midas/gupshup_e2e_bart | midas | text2text-generation | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"arxiv:1910.04073",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,745,000 | 2021-11-14T02:09:24 | 130 | 0 | ---
{}
---
# Gupshup
GupShup: Summarizing Open-Domain Code-Switched Conversations EMNLP 2021
Paper: [https://aclanthology.org/2021.emnlp-main.499.pdf](https://aclanthology.org/2021.emnlp-main.499.pdf)
Github: [https://github.com/midas-research/gupshup](https://github.com/midas-research/gupshup)
### Dataset
Please requ... | [
"SUMMARIZATION"
] | Non_BioNLP |
TransferGraph/SetFit_distilbert-base-uncased__sst2__train-16-0-finetuned-lora-tweet_eval_irony | TransferGraph | text-classification | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:SetFit/distilbert-base-uncased__sst2__train-16-0",
"base_model:adapter:SetFit/distilbert-base-uncased__sst2__train-16-0",
"license:apache-2.0",
"model-index",
"region:us"
] | 1,709,053,281,000 | 2024-02-27T17:01:26 | 0 | 0 | ---
base_model: SetFit/distilbert-base-uncased__sst2__train-16-0
datasets:
- tweet_eval
library_name: peft
license: apache-2.0
metrics:
- accuracy
tags:
- parquet
- text-classification
model-index:
- name: SetFit_distilbert-base-uncased__sst2__train-16-0-finetuned-lora-tweet_eval_irony
results:
- task:
type: ... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
tartuNLP/llammas-prelim | tartuNLP | text-generation | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"et",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,699,017,294,000 | 2023-11-14T12:17:40 | 9 | 1 | ---
language:
- et
widget:
- text: 'Mida sa tead Juhan Liivi kohta? Vastus:'
---
Llama-2-7B finetuned in three stages:
1. 1B tokens of CulturaX (75% Estonain, 25% English)
2. 1M English->Estonian sentence-pairs from CCMatrix (500000), WikiMatrix (400000), Europarl (50000), and OpenSubtitles (50000) as Alpaca-style tra... | [
"TRANSLATION"
] | Non_BioNLP |
rambodazimi/distilroberta-base-finetuned-LoRA-MRPC | rambodazimi | null | [
"safetensors",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"region:us"
] | 1,725,642,888,000 | 2024-09-06T17:16:42 | 0 | 0 | ---
datasets:
- glue
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-LoRA-MRPC
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: glue
type: glue
args: mrpc
metrics:
... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
marefa-nlp/marefa-ner | marefa-nlp | token-classification | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"ar",
"dataset:Marefa-NER",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,745,000 | 2021-12-04T05:21:57 | 3,785 | 23 | ---
datasets:
- Marefa-NER
language: ar
widget:
- text: في استاد القاهرة، بدأ حفل افتتاح بطولة كأس الأمم الأفريقية بحضور رئيس الجمهورية
و رئيس الاتحاد الدولي لكرة القدم
---
# Tebyan تبيـان
## Marefa Arabic Named Entity Recognition Model
## نموذج المعرفة لتصنيف أجزاء النص
<p align="center">
<img src=... | [
"NAMED_ENTITY_RECOGNITION"
] | Non_BioNLP |
learn3r/longt5_xl_sfd_bp_20 | learn3r | text2text-generation | [
"transformers",
"pytorch",
"longt5",
"text2text-generation",
"generated_from_trainer",
"dataset:learn3r/summ_screen_fd_bp",
"base_model:google/long-t5-tglobal-xl",
"base_model:finetune:google/long-t5-tglobal-xl",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatibl... | 1,698,974,157,000 | 2023-11-04T06:54:50 | 18 | 0 | ---
base_model: google/long-t5-tglobal-xl
datasets:
- learn3r/summ_screen_fd_bp
license: apache-2.0
metrics:
- rouge
tags:
- generated_from_trainer
model-index:
- name: longt5_xl_sfd_bp_20
results:
- task:
type: summarization
name: Summarization
dataset:
name: learn3r/summ_screen_fd_bp
t... | [
"SUMMARIZATION"
] | Non_BioNLP |
Yanis23/sparql-translation | Yanis23 | translation | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,682,343,918,000 | 2023-04-24T19:09:42 | 28 | 0 | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
model-index:
- name: sparql-translation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sp... | [
"TRANSLATION"
] | Non_BioNLP |
Hoax0930/marian-finetuned-kftt_kde4-en-to-ja | Hoax0930 | translation | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,664,867,663,000 | 2022-10-04T08:25:10 | 98 | 0 | ---
license: apache-2.0
metrics:
- bleu
tags:
- translation
- generated_from_trainer
model-index:
- name: marian-finetuned-kftt_kde4-en-to-ja
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, th... | [
"TRANSLATION"
] | Non_BioNLP |
gokuls/hBERTv2_new_pretrain_w_init_48_ver2_stsb | gokuls | text-classification | [
"transformers",
"pytorch",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48",
"base_model:finetune:gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48",
"model-index",
"autot... | 1,697,611,370,000 | 2023-10-18T06:52:03 | 36 | 0 | ---
base_model: gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48
datasets:
- glue
language:
- en
metrics:
- spearmanr
tags:
- generated_from_trainer
model-index:
- name: hBERTv2_new_pretrain_w_init_48_ver2_stsb
results:
- task:
type: text-classification
name: Text Classification
datase... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
rootacess/distilbert-base-uncased-distilled-clinc | rootacess | text-classification | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,679,035,202,000 | 2023-03-17T06:51:16 | 29 | 0 | ---
datasets:
- clinc_oos
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
research-dump/bge-base-en-v1.5_wikipedia_r_masked_wikipedia_r_masked | research-dump | text-classification | [
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:finetune:BAAI/bge-base-en-v1.5",
"region:us"
] | 1,738,904,700,000 | 2025-02-07T05:05:17 | 9 | 0 | ---
base_model: BAAI/bge-base-en-v1.5
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: 'St. Timothy High School (Cochrane): Luna <3 (She/Her) ( talk ) 04:19, 15
July 2023 (UTC) This... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
google/paligemma-3b-mix-224-keras | google | image-text-to-text | [
"keras-hub",
"image-text-to-text",
"license:gemma",
"region:us"
] | 1,719,427,626,000 | 2024-10-28T21:57:11 | 4 | 1 | ---
library_name: keras-hub
license: gemma
pipeline_tag: image-text-to-text
extra_gated_heading: Access PaliGemma on Hugging Face
extra_gated_prompt: To access PaliGemma on Hugging Face, you’re required to review
and agree to Google’s usage license. To do this, please ensure you’re logged-in
to Hugging Face and cli... | [
"QUESTION_ANSWERING"
] | Non_BioNLP |
Areeb123/En-Fr_Translation_Model | Areeb123 | translation | [
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"en",
"fr",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"model-index",
"autotrain_... | 1,701,002,848,000 | 2023-11-27T12:58:18 | 38 | 0 | ---
base_model: Helsinki-NLP/opus-mt-en-fr
datasets:
- kde4
language:
- en
- fr
license: apache-2.0
metrics:
- bleu
tags:
- translation
- generated_from_trainer
model-index:
- name: En-Fr_Translation_Model
results:
- task:
type: text2text-generation
name: Sequence-to-sequence Language Modeling
datas... | [
"TRANSLATION"
] | Non_BioNLP |
Bijayab/a100_80_nepberta | Bijayab | text2text-generation | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"summary",
"nepali",
"BART",
"NLP",
"ne",
"dataset:csebuetnlp/xlsum",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,711,130,672,000 | 2024-06-15T15:09:00 | 28 | 0 | ---
datasets:
- csebuetnlp/xlsum
language:
- ne
library_name: transformers
license: mit
metrics:
- rouge
pipeline_tag: text2text-generation
tags:
- summary
- nepali
- BART
- NLP
---
from transformers import pipeline
input_text = """सांसदको लोगो छोडेर सिलाम साक्मामा मात्र भेटिएपछि उनलाई जिज्ञाशा राखियो ।उनले किराती परम... | [
"SUMMARIZATION"
] | Non_BioNLP |
Helsinki-NLP/opus-mt-fr-sk | Helsinki-NLP | translation | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"sk",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,744,000 | 2023-08-16T11:37:13 | 41 | 0 | ---
license: apache-2.0
tags:
- translation
---
### opus-mt-fr-sk
* source languages: fr
* target languages: sk
* OPUS readme: [fr-sk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-sk/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* downl... | [
"TRANSLATION"
] | Non_BioNLP |
zyxzyx/autotrain-sum-1042335811 | zyxzyx | text2text-generation | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain",
"zh",
"dataset:zyxzyx/autotrain-data-sum",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,656,293,128,000 | 2022-06-27T05:15:17 | 96 | 0 | ---
datasets:
- zyxzyx/autotrain-data-sum
language: zh
tags:
- a
- u
- t
- o
- r
- i
- n
widget:
- text: I love AutoTrain 🤗
co2_eq_emissions: 426.15271368095927
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 1042335811
- CO2 Emissions (in grams): 426.15271368095927
## Validation Metri... | [
"SUMMARIZATION"
] | Non_BioNLP |
seongil-dn/bge-m3-kor-retrieval-451949-bs64-finance-50 | seongil-dn | sentence-similarity | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:451949",
"loss:CachedMultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:2101.06983",
"base_model:BAAI/bge-m3",
"base_model:finetune:BAAI/bge-m3",
... | 1,734,167,762,000 | 2024-12-14T09:17:18 | 82 | 0 | ---
base_model: BAAI/bge-m3
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:451949
- loss:CachedMultipleNegativesRankingLoss
widget:
- source_sentence: 사설묘지의 관리방법에 대한 27.9퍼센트의 견해에 근거하면 ... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
Xenova/opus-mt-es-it | Xenova | translation | [
"transformers.js",
"onnx",
"marian",
"text2text-generation",
"translation",
"base_model:Helsinki-NLP/opus-mt-es-it",
"base_model:quantized:Helsinki-NLP/opus-mt-es-it",
"region:us"
] | 1,693,955,542,000 | 2024-10-08T13:42:05 | 57 | 1 | ---
base_model: Helsinki-NLP/opus-mt-es-it
library_name: transformers.js
pipeline_tag: translation
---
https://huggingface.co/Helsinki-NLP/opus-mt-es-it with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more ... | [
"TRANSLATION"
] | Non_BioNLP |
TehranNLP-org/bert-large-mnli | TehranNLP-org | text-classification | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:mnli",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,674,165,998,000 | 2023-01-19T22:22:28 | 116 | 0 | ---
datasets:
- mnli
language:
- en
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: '42'
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: MNLI
type: glue
args: mnli
metrics:
- type: accuracy
... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
HARDYCHEN/text_summarization_finetuned | HARDYCHEN | text2text-generation | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:Falconsai/text_summarization",
"base_model:finetune:Falconsai/text_summarization",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible"... | 1,714,016,412,000 | 2024-04-25T03:40:37 | 5 | 0 | ---
base_model: Falconsai/text_summarization
license: apache-2.0
metrics:
- rouge
tags:
- generated_from_trainer
model-index:
- name: text_summarization_finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofrea... | [
"SUMMARIZATION"
] | Non_BioNLP |
SEBIS/legal_t5_small_multitask_cs_de | SEBIS | text2text-generation | [
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"translation Cszech Deustch model",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,646,263,744,000 | 2021-06-23T10:50:44 | 175 | 0 | ---
datasets:
- dcep europarl jrc-acquis
language: Cszech Deustch
tags:
- translation Cszech Deustch model
widget:
- text: Postavení žen v ozbrojených konfliktech a jejich úloha při obnově zemí po
ukončení konfliktu a v demokratickém procesu v těchto zemích
---
# legal_t5_small_multitask_cs_de model
Model on tra... | [
"TRANSLATION"
] | Non_BioNLP |
RichardErkhov/bertin-project_-_bertin-gpt-j-6B-alpaca-4bits | RichardErkhov | null | [
"safetensors",
"gptj",
"4-bit",
"bitsandbytes",
"region:us"
] | 1,731,426,189,000 | 2024-11-12T15:45:23 | 5 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
bertin-gpt-j-6B-alpaca - bnb 4bits
- Model creator: https://huggingface.co/bertin-project/
- Original model: http... | [
"TRANSLATION"
] | Non_BioNLP |
RichardErkhov/rawsh_-_simpo-math-model-gguf | RichardErkhov | null | [
"gguf",
"arxiv:2401.08417",
"endpoints_compatible",
"region:us"
] | 1,741,879,944,000 | 2025-03-13T15:39:11 | 352 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
simpo-math-model - GGUF
- Model creator: https://huggingface.co/rawsh/
- Original model: https://huggingface.co/r... | [
"TRANSLATION"
] | Non_BioNLP |
mrapacz/interlinear-en-philta-emb-auto-diacritics-bh | mrapacz | text2text-generation | [
"transformers",
"pytorch",
"morph-t5-auto",
"text2text-generation",
"en",
"dataset:mrapacz/greek-interlinear-translations",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,739,017,600,000 | 2025-02-21T21:32:33 | 45 | 0 | ---
base_model:
- PhilTa
datasets:
- mrapacz/greek-interlinear-translations
language:
- en
library_name: transformers
license: cc-by-sa-4.0
metrics:
- bleu
---
# Model Card for Ancient Greek to English Interlinear Translation Model
This model performs interlinear translation from Ancient Greek to English, maintaining ... | [
"TRANSLATION"
] | Non_BioNLP |
ymoslem/whisper-medium-ga2en-v6.3.1-8k-r | ymoslem | automatic-speech-recognition | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ga",
"en",
"dataset:ymoslem/IWSLT2023-GA-EN",
"dataset:ymoslem/FLEURS-GA-EN",
"dataset:ymoslem/BitesizeIrish-GA-EN",
"dataset:ymoslem/SpokenWords-GA-EN-MTed",
"dataset:ymoslem/... | 1,719,016,140,000 | 2025-03-15T11:12:14 | 36 | 1 | ---
base_model: openai/whisper-medium
datasets:
- ymoslem/IWSLT2023-GA-EN
- ymoslem/FLEURS-GA-EN
- ymoslem/BitesizeIrish-GA-EN
- ymoslem/SpokenWords-GA-EN-MTed
- ymoslem/Tatoeba-Speech-Irish
- ymoslem/Wikimedia-Speech-Irish
- ymoslem/EUbookshop-Speech-Irish
language:
- ga
- en
license: apache-2.0
metrics:
- bleu
- wer
... | [
"TRANSLATION"
] | Non_BioNLP |
gaudi/opus-mt-fr-ny-ctranslate2 | gaudi | translation | [
"transformers",
"marian",
"ctranslate2",
"translation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 1,721,663,939,000 | 2024-10-19T04:38:59 | 7 | 0 | ---
license: apache-2.0
tags:
- ctranslate2
- translation
---
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original... | [
"TRANSLATION"
] | Non_BioNLP |
RayNguyent/finetuning-sentiment-model | RayNguyent | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"en... | 1,690,603,011,000 | 2023-07-29T10:46:52 | 13 | 0 | ---
base_model: distilbert-base-uncased
datasets:
- imdb
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: finetuning-sentiment-model
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: imdb
type: imdb
c... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
cmgx/Snowflake-ATM-Avg-v2 | cmgx | sentence-similarity | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:800",
"loss:MatryoshkaLoss",
"loss:CustomContrastiveLoss",
"en",
"arxiv:1908.10084",
"arxiv:2205.13147",
"base_model:Snowflake/snowflake-arctic-embed-m-v1.5",
... | 1,727,995,994,000 | 2024-10-03T23:01:03 | 0 | 0 | ---
base_model: Snowflake/snowflake-arctic-embed-m-v1.5
datasets: []
language:
- en
library_name: sentence-transformers
metrics:
- cosine_accuracy
- dot_accuracy
- manhattan_accuracy
- euclidean_accuracy
- max_accuracy
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extra... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
AhmedSSoliman/DistilBERT-Marian-Model-on-DJANGO | AhmedSSoliman | translation | [
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"Code Generation",
"Machine translation",
"Text generation",
"translation",
"en",
"dataset:AhmedSSoliman/DJANGO",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,673,474,083,000 | 2023-07-30T12:01:43 | 13 | 0 | ---
datasets:
- AhmedSSoliman/DJANGO
language:
- en
license: mit
metrics:
- bleu
- accuracy
pipeline_tag: translation
tags:
- Code Generation
- Machine translation
- Text generation
---
| [
"TRANSLATION"
] | Non_BioNLP |
SoyGema/english-guyarati | SoyGema | translation | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"translation",
"en",
"gu",
"dataset:opus100",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-... | 1,694,256,593,000 | 2023-09-11T06:51:34 | 16 | 0 | ---
base_model: t5-small
datasets:
- opus100
language:
- en
- gu
license: apache-2.0
metrics:
- bleu
pipeline_tag: translation
tags:
- generated_from_trainer
model-index:
- name: english-guyarati
results:
- task:
type: translation
name: Translation
dataset:
name: opus100 en-gu
type: opus... | [
"TRANSLATION"
] | Non_BioNLP |
google/t5-efficient-small-el4 | google | text2text-generation | [
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | 1,646,263,745,000 | 2023-01-24T16:49:01 | 118 | 0 | ---
datasets:
- c4
language:
- en
license: apache-2.0
tags:
- deep-narrow
inference: false
---
# T5-Efficient-SMALL-EL4 (Deep-Narrow version)
T5-Efficient-SMALL-EL4 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture... | [
"TEXT_CLASSIFICATION",
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | Non_BioNLP |
varun-v-rao/bart-base-lora-885K-snli-model1 | varun-v-rao | text-classification | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text-classification",
"generated_from_trainer",
"dataset:stanfordnlp/snli",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
... | 1,718,822,037,000 | 2024-06-19T22:49:03 | 4 | 0 | ---
base_model: facebook/bart-base
datasets:
- stanfordnlp/snli
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: bart-base-lora-885K-snli-model1
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: snli
type: stanf... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
Yongxin-Guo/VTG-LLM | Yongxin-Guo | null | [
"dense-video-caption",
"video-highlight-detection",
"video-summarization",
"moment-retrieval",
"dataset:Yongxin-Guo/VTG-IT",
"arxiv:2405.13382",
"license:apache-2.0",
"region:us"
] | 1,716,262,231,000 | 2024-06-19T08:29:04 | 0 | 3 | ---
datasets:
- Yongxin-Guo/VTG-IT
license: apache-2.0
tags:
- dense-video-caption
- video-highlight-detection
- video-summarization
- moment-retrieval
---
[VTG-LLM: Integrating Timestamp Knowledge into Video LLMs for Enhanced Video Temporal Grounding](https://arxiv.org/abs/2405.13382)
## Overview
We introduce
- VT... | [
"SUMMARIZATION"
] | Non_BioNLP |
mrapacz/interlinear-en-greta-emb-sum-normalized-bh | mrapacz | text2text-generation | [
"transformers",
"pytorch",
"morph-t5-sum",
"text2text-generation",
"en",
"dataset:mrapacz/greek-interlinear-translations",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,739,017,736,000 | 2025-02-21T21:31:34 | 16 | 0 | ---
base_model:
- GreTa
datasets:
- mrapacz/greek-interlinear-translations
language:
- en
library_name: transformers
license: cc-by-sa-4.0
metrics:
- bleu
---
# Model Card for Ancient Greek to English Interlinear Translation Model
This model performs interlinear translation from Ancient Greek to English, maintaining w... | [
"TRANSLATION"
] | Non_BioNLP |
nianlong/memsum-arxiv-summarization | nianlong | null | [
"license:apache-2.0",
"region:us"
] | 1,690,472,626,000 | 2024-03-29T16:00:23 | 0 | 2 | ---
license: apache-2.0
---
[](http://dx.doi.org/10.18653/v1/2022.acl-long.450)
# MemSum: Extractive Summarization of Long Documents Using Multi-Step Episodic Markov Decision Processes
Code for ACL 2022 paper on the topic of long doc... | [
"SUMMARIZATION"
] | Non_BioNLP |
SQAI/bge-embedding-model2 | SQAI | sentence-similarity | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:1865",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"license:apache-2.0",
... | 1,719,879,752,000 | 2024-07-02T00:22:56 | 48 | 0 | ---
base_model: SQAI/bge-embedding-model
datasets: []
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- ... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
MBZUAI/bactrian-x-bloom-7b1-lora | MBZUAI | null | [
"arxiv:2305.15011",
"license:mit",
"region:us"
] | 1,683,744,074,000 | 2023-06-11T10:11:59 | 0 | 0 | ---
license: mit
---
#### Current Training Steps: 100,000
This repo contains a low-rank adapter (LoRA) for Bloom-7b1
fit on the [Stanford-Alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca)
and [databricks-dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data) data in 52 languages.
### Dataset ... | [
"TRANSLATION"
] | Non_BioNLP |
michaelfeil/ct2fast-opus-mt-ROMANCE-en | michaelfeil | translation | [
"transformers",
"ctranslate2",
"translation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 1,684,457,458,000 | 2023-05-19T00:51:47 | 352 | 1 | ---
license: apache-2.0
tags:
- ctranslate2
- translation
---
# # Fast-Inference with Ctranslate2
Speedup inference by 2x-8x using int8 inference in C++
quantized version of [Helsinki-NLP/opus-mt-ROMANCE-en](https://huggingface.co/Helsinki-NLP/opus-mt-ROMANCE-en)
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslat... | [
"TRANSLATION"
] | Non_BioNLP |
tsinik/distilbert-base-uncased-finetuned-emotion | tsinik | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,681,395,064,000 | 2023-04-14T06:26:43 | 13 | 0 | ---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
args: split
... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
seoultechLLM/Llama-3-70B-PIM-4bit | seoultechLLM | null | [
"license:mit",
"region:us"
] | 1,732,535,944,000 | 2024-11-25T12:05:09 | 0 | 0 | ---
license: mit
---
# Model Architecture
## Base Model: Llama 3 (70 billion parameters)
## Quantization: 4-bit integer quantization for memory and computational efficiency
## Framework: Fine-tuned with PyTorch, leveraging Hugging Face Transformers
## PIM Optimization: Enhanced for PIM hardware t... | [
"TEXT_CLASSIFICATION",
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | Non_BioNLP |
AIFS/Prometh-MOEM-24B | AIFS | text-generation | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,707,832,848,000 | 2024-03-20T13:43:22 | 0 | 3 | ---
language:
- en
license: apache-2.0
---
# Prometh-MOEM-24B Model Card
**Prometh-MOEM-24B** is a Mixture of Experts (MoE) model that integrates multiple foundational models to deliver enhanced performance across a spectrum of tasks. It harnesses the combined strengths of its constituent models, optimizing for accura... | [
"QUESTION_ANSWERING",
"TRANSLATION"
] | Non_BioNLP |
RichardErkhov/lemon-mint_-_gemma-ko-7b-instruct-v0.52-gguf | RichardErkhov | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | 1,731,793,513,000 | 2024-11-16T23:04:52 | 106 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gemma-ko-7b-instruct-v0.52 - GGUF
- Model creator: https://huggingface.co/lemon-mint/
- Original model: https://h... | [
"QUESTION_ANSWERING",
"TRANSLATION",
"SUMMARIZATION"
] | Non_BioNLP |
DavieLion/Lllma-3.2-1B | DavieLion | text-generation | [
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-3",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"arxiv:2204.05149",
"arxiv:2405.16406",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_c... | 1,735,265,661,000 | 2024-12-27T07:19:30 | 15 | 0 | ---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
library_name: transformers
license: llama3.2
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
extra_gated_prompt: "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version\
\ Release Date: September 25, 2024\n\n“Agreement” me... | [
"SUMMARIZATION"
] | Non_BioNLP |
RichardErkhov/BueormLLC_-_RAGPT-2_unfunctional-gguf | RichardErkhov | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | 1,741,965,343,000 | 2025-03-14T15:18:16 | 307 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
RAGPT-2_unfunctional - GGUF
- Model creator: https://huggingface.co/BueormLLC/
- Original model: https://huggingf... | [
"QUESTION_ANSWERING"
] | Non_BioNLP |
aasarmehdi/distilbert-base-uncased.finetuned-emotion | aasarmehdi | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,687,090,848,000 | 2023-06-18T15:12:34 | 8 | 0 | ---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased.finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
Jiiiiiiiiiinw/finetuning-sentiment-model-3000-samples | Jiiiiiiiiiinw | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,681,376,197,000 | 2023-04-13T09:05:30 | 11 | 0 | ---
datasets:
- imdb
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: imdb
type: imdb
config: plain_text
... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
Kevin123/distilbert-base-uncased-finetuned-cola | Kevin123 | text-classification | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,663,880,639,000 | 2022-09-22T22:39:03 | 10 | 0 | ---
datasets:
- glue
license: apache-2.0
metrics:
- matthews_correlation
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: glue
type: glue
args: cola
met... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
nbogdan/flant5-small-1ex-paraphrasing-1epochs | nbogdan | null | [
"adapter-transformers",
"adapterhub:self-explanations",
"t5",
"dataset:self-explanations",
"region:us"
] | 1,693,842,308,000 | 2023-09-04T15:45:14 | 0 | 0 | ---
datasets:
- self-explanations
tags:
- adapterhub:self-explanations
- t5
- adapter-transformers
---
# Adapter `nbogdan/flant5-small-1ex-paraphrasing-1epochs` for google/flan-t5-small
An [adapter](https://adapterhub.ml) for the `google/flan-t5-small` model that was trained on the [self-explanations](https://adapter... | [
"PARAPHRASING"
] | Non_BioNLP |
uboza10300/distilbert-hatexplain | uboza10300 | text-classification | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:hatexplain",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints... | 1,733,805,785,000 | 2024-12-10T04:58:07 | 9 | 0 | ---
base_model: distilbert-base-uncased
datasets:
- hatexplain
library_name: transformers
license: apache-2.0
metrics:
- accuracy
- precision
- recall
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-hatexplain
results:
- task:
type: text-classification
name: Text Classification
d... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
YakovElm/Qt15SetFitModel_balance_ratio_3 | YakovElm | text-classification | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 1,685,812,689,000 | 2023-06-03T17:18:44 | 10 | 0 | ---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# YakovElm/Qt15SetFitModel_balance_ratio_3
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient ... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
farleyknight/patent-summarization-google-bigbird-pegasus-large-arxiv-2022-09-20 | farleyknight | text2text-generation | [
"transformers",
"pytorch",
"bigbird_pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:farleyknight/big_patent_5_percent",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,663,709,552,000 | 2022-09-23T02:53:23 | 40 | 0 | ---
datasets:
- farleyknight/big_patent_5_percent
license: apache-2.0
metrics:
- rouge
tags:
- generated_from_trainer
model-index:
- name: patent-summarization-google-bigbird-pegasus-large-arxiv-2022-09-20
results:
- task:
type: summarization
name: Summarization
dataset:
name: farleyknight/big... | [
"SUMMARIZATION"
] | Non_BioNLP |
poltextlab/xlm-roberta-large-english-social-cap-v3 | poltextlab | null | [
"pytorch",
"xlm-roberta",
"arxiv:1910.09700",
"region:us"
] | 1,729,244,656,000 | 2025-02-26T16:08:16 | 99 | 0 | ---
extra_gated_prompt: 'Our models are intended for academic use only. If you are not
affiliated with an academic institution, please provide a rationale for using our
models. Please allow us a few business days to manually review subscriptions.
If you use our models for your work or research, please cite ... | [
"TRANSLATION"
] | Non_BioNLP |
TripleH/distilbert-base-uncased-finetuned-emotion | TripleH | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,675,439,119,000 | 2023-02-03T16:26:23 | 114 | 0 | ---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
haophancs/bge-m3-financial-matryoshka | haophancs | sentence-similarity | [
"sentence-transformers",
"onnx",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:6300",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_m... | 1,719,057,886,000 | 2024-07-09T04:46:45 | 37 | 1 | ---
base_model: BAAI/bge-m3
datasets: []
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
TheBloke/airoboros-33B-gpt4-1.4-GPTQ | TheBloke | text-generation | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | 1,687,797,541,000 | 2023-08-21T03:04:25 | 75 | 27 | ---
license: other
inference: false
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-cont... | [
"QUESTION_ANSWERING"
] | Non_BioNLP |
RichardErkhov/hishab_-_titulm-llama-3.2-1b-v1.1-awq | RichardErkhov | null | [
"safetensors",
"llama",
"4-bit",
"awq",
"region:us"
] | 1,734,802,146,000 | 2024-12-21T17:30:01 | 9 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
titulm-llama-3.2-1b-v1.1 - AWQ
- Model creator: https://huggingface.co/hishab/
- Original model: https://huggingf... | [
"TRANSLATION"
] | Non_BioNLP |
pinzhenchen/sft-lora-es-ollama-7b | pinzhenchen | null | [
"generation",
"question answering",
"instruction tuning",
"es",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | 1,709,682,541,000 | 2024-03-05T23:49:04 | 0 | 0 | ---
language:
- es
license: cc-by-nc-4.0
tags:
- generation
- question answering
- instruction tuning
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://... | [
"QUESTION_ANSWERING"
] | Non_BioNLP |
unsloth/gemma-3-4b-it-GGUF | unsloth | image-text-to-text | [
"transformers",
"gguf",
"gemma3",
"image-text-to-text",
"unsloth",
"gemma",
"google",
"en",
"arxiv:1905.07830",
"arxiv:1905.10044",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1705.03551",
"arxiv:1911.01547",
"arxiv:1907.10641",
"arxiv:1903.00161",
"arxiv:2009.03300",
"arxiv:230... | 1,741,770,263,000 | 2025-03-16T00:01:23 | 35,605 | 39 | ---
base_model: google/gemma-3-4b-it
language:
- en
library_name: transformers
license: gemma
tags:
- unsloth
- transformers
- gemma3
- gemma
- google
---
<div>
<p style="margin-bottom: 0; margin-top: 0;">
<strong>See <a href="https://huggingface.co/collections/unsloth/gemma-3-67d12b7e8816ec6efa7e4e5b">our collec... | [
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | Non_BioNLP |
caldana/distilbert-base-uncased-finetuned-emotion | caldana | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,654,035,418,000 | 2022-05-31T23:07:12 | 10 | 0 | ---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
args: default... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
RichardErkhov/NbAiLab_-_nb-llama-3.2-1B-awq | RichardErkhov | null | [
"safetensors",
"llama",
"4-bit",
"awq",
"region:us"
] | 1,736,746,074,000 | 2025-01-13T05:28:31 | 5 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
nb-llama-3.2-1B - AWQ
- Model creator: https://huggingface.co/NbAiLab/
- Original model: https://huggingface.co/N... | [
"TRANSLATION",
"SUMMARIZATION"
] | Non_BioNLP |
chriswilson2020/distilbert-base-uncased-finetuned-emotion | chriswilson2020 | text-classification | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_co... | 1,713,623,479,000 | 2024-04-20T14:52:36 | 4 | 0 | ---
base_model: distilbert-base-uncased
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
eligapris/kin-eng | eligapris | translation | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"translation",
"en",
"rw",
"dataset:mbazaNLP/NMT_Tourism_parallel_data_en_kin",
"dataset:mbazaNLP/NMT_Education_parallel_data_en_kin",
"dataset:mbazaNLP/Kinyarwanda_English_parallel_dataset",
"license:cc-by-2.0",
"autotrain_compatib... | 1,725,482,595,000 | 2024-09-05T00:00:07 | 0 | 0 | ---
datasets:
- mbazaNLP/NMT_Tourism_parallel_data_en_kin
- mbazaNLP/NMT_Education_parallel_data_en_kin
- mbazaNLP/Kinyarwanda_English_parallel_dataset
language:
- en
- rw
library_name: transformers
license: cc-by-2.0
pipeline_tag: translation
---
## Model Details
### Model Description
<!-- Provide a longer summary o... | [
"TRANSLATION"
] | Non_BioNLP |