pip install -U flash-attn --no-build-isolation
Also make sure that your hardware is compatible with Flash Attention 2. Read more about it in the official documentation of the flash-attn repository. Also make sure to load your model in half-precision (e.g. `torch.float16`).
To load and run a model using Flash Att...
[
0.027060637,
-0.009185308,
-0.008427369,
0.0024145201,
-0.0022925748,
-0.0000647249,
0.00053233886,
0.023578625,
0.030377554,
0.010716192,
0.029416999,
-0.011549174,
0.017890338,
-0.0443056,
-0.016614601,
0.0406735,
-0.029792216,
-0.028456444,
-0.026415264,
-0.020741986,
0.00... |
import torch
from transformers import OPTForCausalLM, GPT2Tokenizer
device = "cuda" # the device to load the model onto
model = OPTForCausalLM.from_pretrained("facebook/opt-350m", torch_dtype=torch.float16, attn_implementation="flash_attention_2")
tokenizer = GPT2Tokenizer.from_pretrained("facebook/opt-350m")
prompt ...
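The snippet above cuts off at the prompt; a minimal sketch of how such an example typically continues (the prompt string and generation settings are illustrative assumptions, not from the original snippet):

```python
# Hypothetical continuation of the truncated example above.
prompt = "A chat between a curious human and a helpful assistant."  # illustrative
model.to(device)
model_inputs = tokenizer(prompt, return_tensors="pt").to(device)
generated_ids = model.generate(**model_inputs, max_new_tokens=30, do_sample=False)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0])
```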
OPTConfig
[[autodoc]] OPTConfig
OPTModel
[[autodoc]] OPTModel
- forward
OPTForCausalLM
[[autodoc]] OPTForCausalLM
- forward
OPTForSequenceClassification
[[autodoc]] OPTForSequenceClassification
- forward
OPTForQuestionAnswering
[[autodoc]] OPTForQuestionAnswering
- forward
TFOPTModel
[[autodoc]] TFOPT...
Expected speedups
Below is an expected speedup diagram that compares pure inference time between the native implementation in transformers using the facebook/opt-2.7b checkpoint and the Flash Attention 2 version of the model using two different sequence lengths.
from transformers import T5ForConditionalGeneration
model = T5ForConditionalGeneration.from_pretrained("google/t5-v1_1-base")
T5 Version 1.1 includes the following improvements compared to the original T5 model:
GEGLU activation in the feed-forward hidden layer, rather than ReLU. See this paper.
Dropout was turned o...
Dropout was turned off in pre-training (quality win). Dropout should be re-enabled during fine-tuning.
Pre-trained on C4 only without mixing in the downstream tasks.
No parameter sharing between the embedding and classifier layer.
"xl" and "xxl" replace "3B" and "11B". The model shapes are a bit different - larger d... |
T5v1.1
Overview
T5v1.1 was released in the google-research/text-to-text-transfer-transformer
repository by Colin Raffel et al. It's an improved version of the original T5 model.
This model was contributed by patrickvonplaten. The original code can be
found here.
Usage tips
One can directly plug in the weights of T5v1.1...
google/t5-v1_1-small
google/t5-v1_1-base
google/t5-v1_1-large
google/t5-v1_1-xl
google/t5-v1_1-xxl
Refer to T5's documentation page for all API references, tips, code examples and notebooks.
Note: T5 Version 1.1 was only pre-trained on C4 excluding any supervised
training. Therefore, this model has to be fine-tuned before it is usable on a downstream task, unlike the original T5
model. Since T5v1.1 was pre-trained in an unsupervised fashion, there's no real advantage to using a task prefix during single-task fine-tuning.
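To make the prefix point concrete, here is a minimal sketch (the input strings are illustrative) contrasting a classic T5-style prefixed input with the plain input you would use when fine-tuning T5v1.1 on a single task:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/t5-v1_1-base")

# Classic T5 often prepends a task prefix; for single-task fine-tuning of
# T5v1.1 the plain input works just as well (both strings are illustrative).
prefixed = tokenizer("summarize: The tower is 324 metres tall.", return_tensors="pt")
plain = tokenizer("The tower is 324 metres tall.", return_tensors="pt")
```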
ViTMAE
Overview
The ViTMAE model was proposed in Masked Autoencoders Are Scalable Vision Learners by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li,
Piotr Dollár, Ross Girshick. The paper shows that, by pre-training a Vision Transformer (ViT) to reconstruct pixel values for masked patches, one can get results after
...
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ViTMAE.
[ViTMAEForPreTraining] is supported by this example script, allowing you to pre-train the model from scratch/further pre-train the model on custom data.
A notebook that illustrates how to visualize ...
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
ViTMAEConfig
[[autodoc]] ViTMAEConfig
ViTMAEModel
[[autodoc]] ViTMAEModel
- forward
ViTM...
MAE (masked auto encoding) is a method for self-supervised pre-training of Vision Transformers (ViTs). The pre-training objective is relatively simple:
by masking a large portion (75%) of the image patches, the model must reconstruct raw pixel values. One can use [ViTMAEForPreTraining] for this purpose.
After pre-tra...
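As a minimal sketch of the pre-training forward pass (the facebook/vit-mae-base checkpoint and the COCO test image are assumptions for illustration):

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, ViTMAEForPreTraining

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # illustrative image
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("facebook/vit-mae-base")
model = ViTMAEForPreTraining.from_pretrained("facebook/vit-mae-base")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.loss)  # pixel reconstruction loss on the masked patches
```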
I-BERT
Overview
The I-BERT model was proposed in I-BERT: Integer-only BERT Quantization by
Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney and Kurt Keutzer. It's a quantized version of RoBERTa running
inference up to four times faster.
The abstract from the paper is the following:
Transformer based models, li...
IBertConfig
[[autodoc]] IBertConfig
IBertModel
[[autodoc]] IBertModel
- forward
IBertForMaskedLM
[[autodoc]] IBertForMaskedLM
- forward
IBertForSequenceClassification
[[autodoc]] IBertForSequenceClassification
- forward
IBertForMultipleChoice
[[autodoc]] IBertForMultipleChoice
- forward
IBertForTokenCla...
Decision Transformer
Overview
The Decision Transformer model was proposed in Decision Transformer: Reinforcement Learning via Sequence Modeling
by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch.
The abstract from the paper is the follow...
ViTMAEModel
[[autodoc]] ViTMAEModel
- forward
ViTMAEForPreTraining
[[autodoc]] transformers.ViTMAEForPreTraining
- forward
TFViTMAEModel
[[autodoc]] TFViTMAEModel
- call
TFViTMAEForPreTraining
[[autodoc]] transformers.TFViTMAEForPreTraining
- call
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
Pegasus
Overview
The Pegasus model was proposed in PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019.
According to the abstract,
Pegasus' pretraining task is intentionally similar to summarization: important sentences are removed/masked from an
input document and are generated together as one output sequence from the remaining sentences, similar to an
extractive summary.
Pegasus achieves SOTA summarization performance on all 12 downstream ta...
This model was contributed by sshleifer. The authors' code can be found here.
Usage tips
Sequence-to-sequence model with the same encoder-decoder architecture as BART. Pegasus is pre-trained jointly on two self-supervised objective functions: Masked Language Modeling (MLM) and a novel summarization-specific pret...
MLM: encoder input tokens are randomly replaced by mask tokens and have to be predicted by the encoder (like in BERT)
GSG: whole encoder input sentences are replaced by a second mask token and fed to the decoder, which has a causal mask to hide the future words, like a regular auto-regressive transformer decoder.
Checkpoints
All the checkpoints are fine-tuned for summarization, except
pegasus-large, from which the other checkpoints are fine-tuned:
Each checkpoint is 2.2 GB on disk and 568M parameters.
FP16 is not supported (help/ideas on this appreciated!).
Summarizing xsum in fp32 takes about 400ms/sample, with default parameter...
All models are transformer encoder-decoders with 16 layers in each component.
The implementation is completely inherited from [BartForConditionalGeneration]
Some key configuration differences:
static, sinusoidal position embeddings
the model starts generating with pad_token_id (which has 0 token_embedding) as the pre...
Usage Example
from transformers import PegasusForConditionalGeneration, PegasusTokenizer
import torch
src_text = [
""" PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were scheduled t... |
model_name = "google/pegasus-xsum"
device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name).to(device)
batch = tokenizer(src_text, truncation=True, padding="longest", return_tensors="pt").to(device)
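A sketch of how the truncated example typically concludes, generating and decoding the summary:

```python
# Continuation of the truncated example above.
translated = model.generate(**batch)
tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
print(tgt_text[0])
```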
FP16 is not supported (help/ideas on this appreciated!).
The adafactor optimizer is recommended for pegasus fine-tuning.
Checkpoints
All the checkpoints are fine-tuned for summarization, except
pegasus-large, from which the other checkpoints are fine-tuned:
PoolFormer
Overview
The PoolFormer model was proposed in MetaFormer is Actually What You Need for Vision by Sea AI Labs. Instead of designing a complicated token mixer to achieve SOTA performance, this work aims to demonstrate that the competence of transformer models largely stems from the general architecture M...
This model was contributed by heytanay. The original code can be found here.
Usage tips
PoolFormer has a hierarchical architecture, where instead of Attention, a simple Average Pooling layer is present. All checkpoints of the model can be found on the hub.
One can use [PoolFormerImageProcessor] to prepare images for t...
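A minimal classification sketch following those tips (the sail/poolformer_s12 checkpoint and the COCO test image are assumptions for illustration):

```python
import requests
import torch
from PIL import Image
from transformers import PoolFormerImageProcessor, PoolFormerForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # illustrative image
image = Image.open(requests.get(url, stream=True).raw)

processor = PoolFormerImageProcessor.from_pretrained("sail/poolformer_s12")
model = PoolFormerForImageClassification.from_pretrained("sail/poolformer_s12")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```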
Resources
Script to fine-tune pegasus
on the XSUM dataset. Data download instructions at examples/pytorch/summarization/.
Causal language modeling task guide
Translation task guide
Summarization task guide
PegasusConfig
[[autodoc]] PegasusConfig
PegasusTokenizer
warning: add_tokens does not work at the moment.
[[au...
PegasusConfig
[[autodoc]] PegasusConfig
PegasusTokenizer
warning: add_tokens does not work at the moment.
[[autodoc]] PegasusTokenizer
PegasusTokenizerFast
[[autodoc]] PegasusTokenizerFast
PegasusModel
[[autodoc]] PegasusModel
- forward
PegasusForConditionalGeneration
[[autodoc]] PegasusForConditionalGeneration
...
TFPegasusModel
[[autodoc]] TFPegasusModel
- call
TFPegasusForConditionalGeneration
[[autodoc]] TFPegasusForConditionalGeneration
- call
FlaxPegasusModel
[[autodoc]] FlaxPegasusModel
- call
- encode
- decode
FlaxPegasusForConditionalGeneration
[[autodoc]] FlaxPegasusForConditionalGeneration
- call
[PoolFormerForImageClassification] is supported by this example script and notebook.
See also: Image classification task guide
| Model variant | Depths | Hidden sizes | Params (M) | ImageNet-1k Top 1 |
| :---------------: | ------------- | ------------------- | :------------: | :-------------------: |
| s12 | [2, 2, 6, 2] | [64, 128, 320, 512] | 12 | 77.2 |
| s24 | [4, 4, 12, 4]... |
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
PoolFormerConfig
[[autodoc]] PoolFormerConfig
PoolFormerFeatureExtractor
[[autodoc]] PoolFo...
The YOSO attention algorithm is implemented through custom CUDA kernels, functions written in CUDA C++ that can be executed multiple times
in parallel on a GPU.
The kernels provide a fast_hash function, which approximates the random projections of the queries and keys using the Fast Hadamard Transform. Using these
ha...
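The custom kernels are used transparently when you run the model; a minimal usage sketch, assuming the released uw-madison/yoso-4096 checkpoint:

```python
import torch
from transformers import AutoTokenizer, YosoModel

tokenizer = AutoTokenizer.from_pretrained("uw-madison/yoso-4096")
model = YosoModel.from_pretrained("uw-madison/yoso-4096")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence length, hidden size)
```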
YOSO Attention Algorithm. Taken from the original paper.
Resources
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
YosoConfig
[[autodoc]] YosoConfig
YosoModel
[[autodoc]] YosoModel
- forward
YosoForMaskedLM
[[autodoc]] YosoForMaskedLM
- forward
YosoForSequenceClassification
[[autodoc]] YosoForSequenceClassification
- forward
YosoForMultipleChoice
[[autodoc]] YosoForMultipleChoice
- forward
YosoForTokenClassification...
Trajectory Transformer
This model is in maintenance mode only, so we won't accept any new PRs changing its code.
If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0.
You can do so by running the following command: pip install -U transformers==4.30.0.
YOSO
Overview
The YOSO model was proposed in You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling
by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh. YOSO approximates standard softmax self-attention
via a Bernoulli sampling scheme based on Locality S...
The architecture is similar to LLaMA but with RoPE applied to 25% of head embedding dimensions, LayerNorm instead of RMSNorm, and optional QKV bias terms.
StableLM 3B 4E1T-based models use the same tokenizer as [GPTNeoXTokenizerFast].
StableLM 3B 4E1T and StableLM Zephyr 3B can be found on the Hugging Face Hub.
The fol...
StableLM
Overview
StableLM 3B 4E1T was proposed in StableLM 3B 4E1T: Technical Report by Stability AI and is the first model in a series of multi-epoch pre-trained language models.
Model Details
StableLM 3B 4E1T is a decoder-only base language model pre-trained on 1 trillion tokens of diverse English and code datasets...
Overview
The Trajectory Transformer model was proposed in Offline Reinforcement Learning as One Big Sequence Modeling Problem by Michael Janner, Qiyang Li, Sergey Levine.
The abstract from the paper is the following:
Reinforcement learning (RL) is typically concerned with estimating stationary policies or single-ste...
Combining StableLM and Flash Attention 2
First, make sure to install the latest version of Flash Attention v2.
pip install -U flash-attn --no-build-isolation
Also make sure that your hardware is compatible with Flash-Attention 2. Read more about it in the official documentation of the flash-attn repository. Note: you ...
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablelm-3b-4e1t")
model = AutoModelForCausalLM.from_pretrained("stabilityai/stablelm-3b-4e1t", torch_dtype=torch.bfloat16, attn_implemen...
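The call above is cut off mid-argument; a sketch of the full call it is building, assuming the standard Flash Attention 2 kwarg shown earlier for OPT:

```python
# A sketch of the full (truncated) call above; the kwarg value is assumed.
model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stablelm-3b-4e1t",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
).to(device)
```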
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablelm-3b-4e1t")
model = AutoModelForCausalLM.from_pretrained("stabilityai/stablelm-3b-4e1t")
model.to(device)
model_inputs = tokenizer("The weather...
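A sketch completing the truncated generation example (the prompt and sampling settings are illustrative):

```python
# Hypothetical continuation of the truncated example above.
model_inputs = tokenizer("The weather is always wonderful in", return_tensors="pt").to(device)
generated_ids = model.generate(**model_inputs, max_new_tokens=32, do_sample=True)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0])
```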
StableLmConfig
[[autodoc]] StableLmConfig
StableLmModel
[[autodoc]] StableLmModel
- forward
StableLmForCausalLM
[[autodoc]] StableLmForCausalLM
- forward
StableLmForSequenceClassification
[[autodoc]] StableLmForSequenceClassification
- forward
BERTweet
Overview
The BERTweet model was proposed in BERTweet: A pre-trained language model for English Tweets by Dat Quoc Nguyen, Thanh Vu, Anh Tuan Nguyen.
The abstract from the paper is the following:
We present BERTweet, the first public large-scale pre-trained language model for English Tweets. Our BERTweet, havi...
import torch
from transformers import AutoModel, AutoTokenizer
bertweet = AutoModel.from_pretrained("vinai/bertweet-base")
# For transformers v4.x+:
tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base", use_fast=False)
# For transformers v3.x:
tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base")
# IN...
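A sketch of how the truncated example typically continues, extracting features for one already-normalized tweet (the tweet text is illustrative):

```python
import torch

# The input tweet is assumed to be already normalized (illustrative example).
line = "SC has first won the SEA Games gold medal :D"
input_ids = torch.tensor([tokenizer.encode(line)])
with torch.no_grad():
    features = bertweet(input_ids)  # returns last_hidden_state and pooler_output
```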
BridgeTower
Overview
The BridgeTower model was proposed in BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning by Xiao Xu, Chenfei Wu, Shachar Rosenman, Vasudev Lal, Wanxiang Che, Nan Duan. The goal of this model is to build a
bridge between each uni-modal encoder and the cross-mo...
This implementation is the same as BERT, except for the tokenization method. Refer to BERT documentation for
API reference information.
BertweetTokenizer
[[autodoc]] BertweetTokenizer
BridgeTower architecture. Taken from the original paper.
This model was contributed by Anahita Bhiwandiwalla, Tiep Le and Shaoyen Tseng. The original code can be found here.
Usage tips and examples
BridgeTower consists of a visual encoder, a textual encoder and cross-modal encoder with multiple lightweight bridge l...
from transformers import BridgeTowerProcessor, BridgeTowerForContrastiveLearning
import requests
from PIL import Image
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["An image of two cats chilling on a couch", "A football player scoring a... |
The following example shows how to run image-text retrieval using [BridgeTowerProcessor] and [BridgeTowerForImageAndTextRetrieval].
The following example shows how to run masked language modeling using [BridgeTowerProcessor] and [BridgeTowerForMaskedLM].
from transformers import BridgeTowerProcessor, BridgeTowerForImageAndTextRetrieval
import requests
from PIL import Image
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["An image of two cats chilling on a couch", "A football player scoring... |
from transformers import BridgeTowerProcessor, BridgeTowerForMaskedLM
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000360943.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
text = "a looking out of the window"
processor = BridgeTowerProcessor.from_p... |
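A sketch completing the masked-language-modeling example (the checkpoint name is an assumption based on the released BridgeTower models):

```python
# A sketch completing the truncated example above.
processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")
model = BridgeTowerForMaskedLM.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")

encoding = processor(image, text, return_tensors="pt")
outputs = model(**encoding)
predicted_ids = outputs.logits.argmax(dim=-1).squeeze(0).tolist()
print(processor.tokenizer.decode(predicted_ids))  # fills in the <mask> token
```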
BridgeTowerConfig
[[autodoc]] BridgeTowerConfig
BridgeTowerTextConfig
[[autodoc]] BridgeTowerTextConfig
BridgeTowerVisionConfig
[[autodoc]] BridgeTowerVisionConfig
BridgeTowerImageProcessor
[[autodoc]] BridgeTowerImageProcessor
- preprocess
BridgeTowerProcessor
[[autodoc]] BridgeTowerProcessor
- call
BridgeTo...
Tips:
This implementation of BridgeTower uses [RobertaTokenizer] to generate text embeddings and OpenAI's CLIP/ViT model to compute visual embeddings.
Checkpoints for the pre-trained BridgeTower base model and for BridgeTower fine-tuned on masked language modeling and image-text matching are released.
Please refer to Table 5 for BridgeTower's pe...
Bart uses a standard seq2seq/machine translation architecture with a bidirectional encoder (like BERT) and a
left-to-right decoder (like GPT).
The pretraining task involves randomly shuffling the order of the original sentences and a novel in-filling scheme,
where spans of text are replaced with a single mask tok...
BART is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than
the left.
Sequence-to-sequence model with an encoder and a decoder. The encoder is fed a corrupted version of the tokens, the decoder is fed the original tokens (but has a mask to hide the future words like a...
BART
Overview
The Bart model was proposed in BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation,
Translation, and Comprehension by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan
Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer on 29 Oct, 2019.
According to th...
mask random tokens (like in BERT)
delete random tokens
mask a span of k tokens with a single mask token (a span of 0 tokens is an insertion of a mask token)
permute sentences
rotate the document to make it start at a specific token
Implementation Notes
Bart doesn't use token_type_ids for sequence classification. Use [BartTokenizer] or
[~BartTokenizer.encode] to get the proper splitting.
The forward pass of [BartModel] will create the decoder_input_ids if they are not passed.
This is different from some other modeling APIs. A typical use case of this feature is ...
This model was contributed by sshleifer. The authors' code can be found here.
Usage tips:
BART is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than
the left.
Mask Filling
The facebook/bart-base and facebook/bart-large checkpoints can be used to fill multi-token masks.
from transformers import BartForConditionalGeneration, BartTokenizer
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large", forced_bos_token_id=0)
tok = BartTokenizer.from_pretraine...
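A sketch of how the truncated mask-filling example typically continues (the masked input sentence is illustrative):

```python
# Hypothetical continuation of the truncated example above.
tok = BartTokenizer.from_pretrained("facebook/bart-large")
example = "UN Chief Says There Is No <mask> in Syria"  # illustrative masked input
batch = tok(example, return_tensors="pt")
generated_ids = model.generate(batch["input_ids"])
print(tok.batch_decode(generated_ids, skip_special_tokens=True)[0])
```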
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BART. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicat...
A blog post on Distributed Training: Train BART/T5 for Summarization using 🤗 Transformers and Amazon SageMaker.
A notebook on how to finetune BART for summarization with fastai using blurr. 🌎
A notebook on how to finetune BART for summarization in two languages with Trainer class. 🌎
[BartForConditionalGeneration] ...
[BartForConditionalGeneration] is supported by this example script and notebook.
[TFBartForConditionalGeneration] is supported by this example script and notebook.
[FlaxBartForConditionalGeneration] is supported by this example script and notebook.
Masked language modeling chapter of the 🤗 Hugging Face Course.
Masked ...
A notebook on how to finetune mBART using Seq2SeqTrainer for Hindi to English translation. 🌎
[BartForConditionalGeneration] is supported by this example script and notebook.
[TFBartForConditionalGeneration] is supported by this example script and notebook.
Translation task guide
See also:
- Text classification task guide
- Question answering task guide
- Causal language modeling task guide
- Distilled checkpoints are described in this paper.
BartConfig
[[autodoc]] BartConfig
- all
BartTokenizer
[[autodoc]] BartTokenizer
- all
BartTokenizerFast
[[autodoc]] BartTokenizerFast
- all
BartModel
[[autodoc]] BartModel
- forward
BartForConditionalGeneration
[[autodoc]] BartForConditionalGeneration
- forward
BartForSequenceClassification
[[autodoc]] BartForSequenceClassification
- forward
BartForQuestionAnswering
[[autodoc]] BartForQuestionAnswering
- forward
BartForCausalLM
[[autodoc]] BartForCausalLM
TFBartModel
[[autodoc]] TFBartModel
- call
TFBartForConditionalGeneration
[[autodoc]] TFBartForConditionalGeneration
- call
TFBartForSequenceClassification
[[autodoc]] TFBartForSequenceClassification
- call
FlaxBartModel
[[autodoc]] FlaxBartModel
- call
- encode
- decode
FlaxBartForConditionalGeneration
[[autodoc]] FlaxBartForConditionalGeneration
- call
- encode
- decode
FlaxBartForSequenceClassification
[[autodoc]] FlaxBartForSequenceClassification
- call
- encode
- decode
FlaxBartFor...
TAPEX
This model is in maintenance mode only, so we won't accept any new PRs changing its code.
If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0.
You can do so by running the following command: pip install -U transformers==4.30.0.
TAPEX is a generative (seq2seq) model. One can directly plug in the weights of TAPEX into a BART model.
TAPEX has checkpoints on the hub that are either pre-trained only, or fine-tuned on WTQ, SQA, WikiSQL and TabFact.
Sentences + tables are presented to the model as sentence + " " + linearized table. The linearized...
Usage: inference
Below, we illustrate how to use TAPEX for table question answering. As one can see, one can directly plug in the weights of TAPEX into a BART model.
We use the Auto API, which will automatically instantiate the appropriate tokenizer ([TapexTokenizer]) and model ([BartForConditionalGeneration]) for us,
...
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import pandas as pd
tokenizer = AutoTokenizer.from_pretrained("microsoft/tapex-large-finetuned-wtq")
model = AutoModelForSeq2SeqLM.from_pretrained("microsoft/tapex-large-finetuned-wtq")
# prepare table + question
data = {"Actors": ["Brad Pitt", "Leonardo Di ...
Overview
The TAPEX model was proposed in TAPEX: Table Pre-training via Learning a Neural SQL Executor by Qian Liu,
Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. TAPEX pre-trains a BART model to solve synthetic SQL queries, after
which it can be fine-tuned to answer natural language quest...
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("microsoft/tapex-large-finetuned-tabfact")
model = AutoModelForSequenceClassification.from_pretrained("microsoft/tapex-large-finetuned-tabfact")
# prepare table + sentence
data = {"Actors": ["Brad Pitt",...
Note that [TapexTokenizer] also supports batched inference. Hence, one can provide a batch of different tables/questions, or a batch of a single table
and multiple questions, or a batch of a single query and multiple tables. Let's illustrate this:
# prepare table + question
data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]}
table = pd.DataFrame.from_dict(data)
questions = [
"how many movies does Leonardo Di Caprio have?",
"which actor has 69 movies?",
"what's the first name of the actor... |
TAPEX architecture is the same as BART, except for tokenization. Refer to BART documentation for information on
configuration classes and their parameters. The TAPEX-specific tokenizer is documented below.
TapexTokenizer
[[autodoc]] TapexTokenizer
- call
- save_vocabulary
In case one wants to do table verification (i.e. the task of determining whether a given sentence is supported or refuted by the contents
of a table), one can instantiate a [BartForSequenceClassification] model. TAPEX has checkpoints on the hub fine-tuned on TabFact, an important
benchmark for table fact checking (it a...
EfficientFormer
Overview
The EfficientFormer model was proposed in EfficientFormer: Vision Transformers at MobileNet Speed
by Yanyu Li, Geng Yuan, Yang Wen, Eric Hu, Georgios Evangelidis, Sergey Tulyakov, Yanzhi Wang, Jian Ren. EfficientFormer proposes a
dimension-consistent pure transformer that can be run on mobil...
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("google/madlad400-3b-mt")
tokenizer = AutoTokenizer.from_pretrained("google/madlad400-3b-mt")
inputs = tokenizer("<2pt> I love pizza!", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.batc...
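The print call above is cut off; a sketch of the complete line:

```python
# Hypothetical completion of the truncated print call above; decodes the
# Portuguese translation requested by the <2pt> target-language token.
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```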
MADLAD-400
Overview
MADLAD-400 models were released in the paper MADLAD-400: A Multilingual And Document-Level Large Audited Dataset.
The abstract from the paper is the following:
We introduce MADLAD-400, a manually audited, general domain 3T token monolingual dataset based on CommonCrawl, spanning 419 languages. We...
Image classification task guide
EfficientFormerConfig
[[autodoc]] EfficientFormerConfig
EfficientFormerImageProcessor
[[autodoc]] EfficientFormerImageProcessor
- preprocess
EfficientFormerModel
[[autodoc]] EfficientFormerModel
- forward
EfficientFormerForImageClassification
[[autodoc]] EfficientFormerForImage...
TFEfficientFormerModel
[[autodoc]] TFEfficientFormerModel
- call
TFEfficientFormerForImageClassification
[[autodoc]] TFEfficientFormerForImageClassification
- call
TFEfficientFormerForImageClassificationWithTeacher
[[autodoc]] TFEfficientFormerForImageClassificationWithTeacher
- call
Google has released the following variants:
google/madlad400-3b-mt
google/madlad400-7b-mt
google/madlad400-7b-mt-bt
google/madlad400-10b-mt
The original checkpoints can be found here.
Refer to T5's documentation page for all API references, code examples, and notebooks. For more details regarding training and eva...
Mamba is a new state space model architecture that rivals the classic Transformers. It is based on the line of progress on structured state space models, with an efficient hardware-aware design and implementation in the spirit of FlashAttention.
Mamba stacks mixer layers, which are the equivalent of Attention layers. ...
Peft finetuning
The slow version is not very stable for training, and the fast one needs float32!
from datasets import load_dataset
from trl import SFTTrainer
from peft import LoraConfig
from transformers import AutoTokenizer, AutoModelForCausalLM, TrainingArguments
model_id = "ArthurZ/mamba-2.8b"
tokenizer = ...
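A sketch completing the fine-tuning setup above; the dataset, LoRA target modules, and hyperparameters are illustrative assumptions, and the trl SFTTrainer API also changes across versions:

```python
# A sketch under assumed versions of trl/peft; dataset, LoRA target modules,
# and hyperparameters are illustrative, not prescriptive.
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_id)

dataset = load_dataset("Abirate/english_quotes", split="train")
training_args = TrainingArguments(output_dir="./mamba-sft", num_train_epochs=1, per_device_train_batch_size=4)
lora_config = LoraConfig(r=8, target_modules=["embeddings", "in_proj", "x_proj", "out_proj"], task_type="CAUSAL_LM", bias="none")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    args=training_args,
    peft_config=lora_config,
    train_dataset=dataset,
    dataset_text_field="quote",
)
trainer.train()
```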
Mamba
Overview
The Mamba model was proposed in Mamba: Linear-Time Sequence Modeling with Selective State Spaces by Albert Gu and Tri Dao.
This model is a new paradigm architecture based on state-space-models. You can read more about the intuition behind these here.
The abstract from the paper is the following:
Foundat...
CvT models are regular Vision Transformers, but trained with convolutions. They outperform the original model (ViT) when fine-tuned on ImageNet-1K and CIFAR-100.
You can check out demo notebooks regarding inference as well as fine-tuning on custom data here (you can just replace [ViTFeatureExtractor] by [AutoImagePro...
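For reference, a minimal inference sketch (the microsoft/cvt-13 checkpoint and the COCO test image are assumptions for illustration):

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, CvtForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # illustrative image
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("microsoft/cvt-13")
model = CvtForImageClassification.from_pretrained("microsoft/cvt-13")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```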
Convolutional Vision Transformer (CvT)
Overview
The CvT model was proposed in CvT: Introducing Convolutions to Vision Transformers by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan and Lei Zhang. The Convolutional vision Transformer (CvT) improves the Vision Transformer (ViT) in performance and ...
This model was contributed by ArthurZ.
The original code can be found here.
Usage
A simple generation example:
from transformers import MambaConfig, MambaForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("ArthurZ/mamba-130m")
tokenizer.pad_token = tokenizer.eos_token
model = Mamb...
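A sketch of how the truncated generation example typically continues (the prompt is illustrative):

```python
# Hypothetical continuation of the truncated example above.
model = MambaForCausalLM.from_pretrained("ArthurZ/mamba-130m")
input_ids = tokenizer("Hey how are you doing?", return_tensors="pt")["input_ids"]
out = model.generate(input_ids, max_new_tokens=10)
print(tokenizer.batch_decode(out)[0])
```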
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with CvT.
[CvtForImageClassification] is supported by this example script and notebook.
See also: Image classification task guide
If you're interested in submitting a resource to be included here, please feel f...
CvtModel
[[autodoc]] CvtModel
- forward
CvtForImageClassification
[[autodoc]] CvtForImageClassification
- forward
TFCvtModel
[[autodoc]] TFCvtModel
- call
TFCvtForImageClassification
[[autodoc]] TFCvtForImageClassification
- call
DINOv2
Overview
The DINOv2 model was proposed in DINOv2: Learning Robust Visual Features without Supervision by
Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, Mahmoud Assran, Nicolas Ballas, Wojciech Galuba, ...
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DINOv2.
Demo notebooks for DINOv2 can be found here. 🌎
[Dinov2ForImageClassification] is supported by this example script and notebook.
See also: Image classification task guide