```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50", src_lang="en_XX", tgt_lang="ro_RO")

src_text = "UN Chief Says There Is No Military Solution in Syria"
```
Generation
While generating the target text, set the decoder_start_token_id to the target language id. The following
example shows how to translate English to Romanian using the facebook/mbart-large-en-ro model.
```python
from transformers import MBartForConditionalGeneration, MBartTokenizer

tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro", src_lang="en_XX")
# load the model checkpoint so the snippet runs end to end
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-en-ro")

article = "UN Chief Says There Is No Military Solution in Syria"
inputs = tokenizer(article, return_tensors="pt")
translated_tokens = model.generate(**inputs, decoder_start_token_id=tokenizer.lang_code_to_id["ro_RO"])
tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]
```
To generate using the mBART-50 multilingual translation models, eos_token_id is used as the
decoder_start_token_id, and the target language id is forced as the first generated token. To force the
target language id as the first generated token, pass the forced_bos_token_id parameter to the generate method.
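For illustration, a minimal sketch of this flow, assuming the facebook/mbart-large-50-many-to-many-mmt checkpoint and a Hindi-to-French direction (swap the language codes for other pairs):

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt", src_lang="hi_IN")

encoded_hi = tokenizer("संयुक्त राष्ट्र के प्रमुख का कहना है कि सीरिया में कोई सैन्य समाधान नहीं है", return_tensors="pt")
# force the target language id (French) to be the first generated token
generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"])
print(tokenizer.batch_decode(generated_tokens, skip_special_tokens=True))
```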
Documentation resources
Text classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Translation task guide
Summarization task guide
MBartConfig
[[autodoc]] MBartConfig
MBartTokenizer
[[autodoc]] MBartTokenizer
- build_inputs_with_special_tokens
MBartModel
[[autodoc]] MBartModel
MBartForConditionalGeneration
[[autodoc]] MBartForConditionalGeneration
MBartForQuestionAnswering
[[autodoc]] MBartForQuestionAnswering
MBartForSequenceClassification
[[autodoc]] MBartForSequenceClassification
MBartForCausalLM
[[autodoc]] MBartForCausalLM
- forward
TFMBartModel
[[autodoc]] TFMBartModel
- call
Supervised training
Generation
Whisper
Overview
The Whisper model was proposed in Robust Speech Recognition via Large-Scale Weak Supervision by Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever.
The abstract from the paper is the following:
We study the capabilities of speech processing systems trained simply to predict large amounts of transcripts of audio on the internet.
```bash
python src/transformers/models/whisper/convert_openai_to_hf.py --checkpoint_path "" --pytorch_dump_folder_path "Arthur/whisper-3" --convert_preprocessor True
```
The script will automatically determine all necessary parameters from the OpenAI checkpoint. A tiktoken library needs to be installed
to perform the conversion of the OpenAI tokenizer to the tokenizers version.
```python
from datasets import load_dataset
from transformers import WhisperProcessor, WhisperForConditionalGeneration

# Select an audio file and read it
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio_sample = ds[0]["audio"]
waveform = audio_sample["array"]
sampling_rate = audio_sample["sampling_rate"]
```
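From here, a sketch of the remaining steps, assuming the openai/whisper-base checkpoint (any Whisper checkpoint follows the same pattern):

```python
# load the processor and model
processor = WhisperProcessor.from_pretrained("openai/whisper-base")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-base")

# convert the raw waveform into log-mel input features
inputs = processor(waveform, sampling_rate=sampling_rate, return_tensors="pt")

# generate token ids and decode them back into text
predicted_ids = model.generate(**inputs)
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
print(transcription)
```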
FlaxMBartModel
[[autodoc]] FlaxMBartModel
- call
- encode
- decode
FlaxMBartForConditionalGeneration
[[autodoc]] FlaxMBartForConditionalGeneration
- call
- encode
- decode
FlaxMBartForSequenceClassification
[[autodoc]] FlaxMBartForSequenceClassification
- call
- encode
- decode
FlaxMBartForQuestionAnswering
[[autodoc]] FlaxMBartForQuestionAnswering
- call
- encode
- decode
A fork with a script to convert a Whisper model in Hugging Face format to OpenAI format. 🌎
Usage example:
```bash
pip install -U openai-whisper
python convert_hf_to_openai.py \
    --checkpoint openai/whisper-tiny \
    --whisper_dump_path whisper-tiny-openai.pt
```
The model usually performs well without requiring any finetuning.
The model follows a classic encoder-decoder architecture, which means that it relies on the [~generation.GenerationMixin.generate] function for inference.
One can use [WhisperProcessor] to prepare audio for the model, and decode the predicted IDs back into text.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Whisper. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
TFWhisperModel
[[autodoc]] TFWhisperModel
- call
TFWhisperForConditionalGeneration
[[autodoc]] TFWhisperForConditionalGeneration
- call
FlaxWhisperModel
[[autodoc]] FlaxWhisperModel
- call
FlaxWhisperForConditionalGeneration
[[autodoc]] FlaxWhisperForConditionalGeneration
- call
FlaxWhisperForAudioClassification
[[autodoc]] FlaxWhisperForAudioClassification
- call
LayoutLMv3 architecture. Taken from the original paper.
This model was contributed by nielsr. The TensorFlow version of this model was added by chriskoo, tokec, and lre. The original code can be found here.
Usage tips
WhisperConfig
[[autodoc]] WhisperConfig
WhisperTokenizer
[[autodoc]] WhisperTokenizer
- set_prefix_tokens
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
- batch_decode
- decode
- basic_normalize
- normalize
WhisperTokenizerFast
[[autodoc]] WhisperTokenizerFast
In terms of data processing, LayoutLMv3 is identical to its predecessor LayoutLMv2, except that:
images need to be resized and normalized with channels in regular RGB format, whereas LayoutLMv2 normalizes the images internally and expects the channels in BGR format.
text is tokenized using byte-pair encoding (BPE), as opposed to WordPiece.
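As an illustration, a minimal sketch of preparing a document image with [LayoutLMv3Processor]; the checkpoint is the public microsoft/layoutlmv3-base and document.png is a hypothetical local file:

```python
from PIL import Image
from transformers import LayoutLMv3Processor, LayoutLMv3ForTokenClassification

processor = LayoutLMv3Processor.from_pretrained("microsoft/layoutlmv3-base")
model = LayoutLMv3ForTokenClassification.from_pretrained("microsoft/layoutlmv3-base")

# the processor resizes/normalizes the image (RGB), runs OCR by default,
# and tokenizes the recognized words with BPE
image = Image.open("document.png").convert("RGB")  # hypothetical input file
encoding = processor(image, return_tensors="pt")
outputs = model(**encoding)
```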
WhisperModel
[[autodoc]] WhisperModel
- forward
- _mask_input_features
WhisperForConditionalGeneration
[[autodoc]] WhisperForConditionalGeneration
- forward
- generate
WhisperForCausalLM
[[autodoc]] WhisperForCausalLM
- forward
WhisperForAudioClassification
[[autodoc]] WhisperForAudioClassification
- forward
Regarding usage of [LayoutLMv3Processor], we refer to the usage guide of its predecessor.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LayoutLMv3. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
LayoutLMv3
Overview
The LayoutLMv3 model was proposed in LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei.
LayoutLMv3 simplifies LayoutLMv2 by using patch embeddings (as in ViT) instead of leveraging a CNN backbone, and pre-trains the model on three objectives: masked language modeling (MLM), masked image modeling (MIM) and word-patch alignment (WPA).
[LayoutLMv2ForQuestionAnswering] is supported by this notebook.
Question answering task guide
Document question answering
- Document question answering task guide
LayoutLMv3Config
[[autodoc]] LayoutLMv3Config
LayoutLMv3FeatureExtractor
[[autodoc]] LayoutLMv3FeatureExtractor
- call
LayoutLMv3ImageProcessor
[[autodoc]] LayoutLMv3ImageProcessor
- preprocess
LayoutLMv3Tokenizer
[[autodoc]] LayoutLMv3Tokenizer
- call
- save_vocabulary
LayoutLMv3 is nearly identical to LayoutLMv2, so we've also included LayoutLMv2 resources you can adapt for LayoutLMv3 tasks. For these notebooks, take care to use [LayoutLMv2Processor] instead when preparing data for the model!
Demo notebooks for LayoutLMv3 can be found here.
Demo scripts can be found here.
LayoutLMv3Model
[[autodoc]] LayoutLMv3Model
- forward
LayoutLMv3ForSequenceClassification
[[autodoc]] LayoutLMv3ForSequenceClassification
- forward
LayoutLMv3ForTokenClassification
[[autodoc]] LayoutLMv3ForTokenClassification
- forward
LayoutLMv3ForQuestionAnswering
[[autodoc]] LayoutLMv3ForQuestionAnswering
- forward
[LayoutLMv2ForSequenceClassification] is supported by this notebook.
Text classification task guide
[LayoutLMv3ForTokenClassification] is supported by this example script and notebook.
A notebook for how to perform inference with [LayoutLMv2ForTokenClassification] and a notebook for how to perform inference when no labels are available with [LayoutLMv2ForTokenClassification].
TFLayoutLMv3Model
[[autodoc]] TFLayoutLMv3Model
- call
TFLayoutLMv3ForSequenceClassification
[[autodoc]] TFLayoutLMv3ForSequenceClassification
- call
TFLayoutLMv3ForTokenClassification
[[autodoc]] TFLayoutLMv3ForTokenClassification
- call
TFLayoutLMv3ForQuestionAnswering
[[autodoc]] TFLayoutLMv3ForQuestionAnswering
- call
Deformable DETR architecture. Taken from the original paper.
This model was contributed by nielsr. The original code can be found here.
Usage tips
Training Deformable DETR is equivalent to training the original DETR model. See the resources section below for demo notebooks.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Deformable DETR.
Demo notebooks regarding inference + fine-tuning on a custom dataset for [DeformableDetrForObjectDetection] can be found here.
See also: Object detection task guide.
BLIP
Overview
The BLIP model was proposed in BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation by Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi.
BLIP is a model that is able to perform various multi-modal tasks including:
- Visual Question Answering
- Image-Text retrieval (Image-text matching)
- Image Captioning
This model was contributed by ybelkada.
The original code can be found here.
Resources
Jupyter notebook on how to fine-tune BLIP for image captioning on a custom dataset
BlipConfig
[[autodoc]] BlipConfig
- from_text_vision_configs
BlipTextConfig
[[autodoc]] BlipTextConfig
BlipVisionConfig
[[autodoc]] BlipVisionConfig
Deformable DETR
Overview
The Deformable DETR model was proposed in Deformable DETR: Deformable Transformers for End-to-End Object Detection by Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai.
Deformable DETR mitigates the slow convergence issues and limited feature spatial resolution of the original DETR by leveraging a new deformable attention module, which only attends to a small set of key sampling points around a reference.
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
DeformableDetrImageProcessor
[[autodoc]] DeformableDetrImageProcessor
- preprocess
- post_process_object_detection
TFBlipModel
[[autodoc]] TFBlipModel
- call
- get_text_features
- get_image_features
TFBlipTextModel
[[autodoc]] TFBlipTextModel
- call
TFBlipVisionModel
[[autodoc]] TFBlipVisionModel
- call
TFBlipForConditionalGeneration
[[autodoc]] TFBlipForConditionalGeneration
- call
TFBlipForImageTextRetrieval
[[autodoc]] TFBlipForImageTextRetrieval
- call
BlipModel
[[autodoc]] BlipModel
- forward
- get_text_features
- get_image_features
BlipTextModel
[[autodoc]] BlipTextModel
- forward
BlipVisionModel
[[autodoc]] BlipVisionModel
- forward
BlipForConditionalGeneration
[[autodoc]] BlipForConditionalGeneration
- forward
BlipForImageTextRetrieval
[[autodoc]] BlipForImageTextRetrieval
- forward
```bash
git clone https://github.com/persimmon-ai-labs/adept-inference
wget https://axtkn4xl5cip.objectstorage.us-phoenix-1.oci.customer-oci.com/n/axtkn4xl5cip/b/adept-public-data/o/8b_base_model_release.tar
tar -xvf 8b_base_model_release.tar
python src/transformers/models/persimmon/convert_persimmon_weights_to_hf.py --inpu...
```
Persimmon
Overview
The Persimmon model was created by ADEPT, and authored by Erich Elsen, Augustus Odena, Maxwell Nye, Sağnak Taşırlar, Tri Dao, Curtis Hawthorne, Deepak Moparthi, Arushi Somani.
The authors introduced Persimmon-8B, a decoder model based on the classic transformers architecture, with query and key normalization.
The Persimmon models were trained using bfloat16, but the original inference uses float16. The checkpoints uploaded on the hub use torch_dtype = 'float16', which will be
used by the AutoModel API to cast the checkpoints from torch.float32 to torch.float16.
The dtype of the online weights is mostly irrelevant, unless you are using torch_dtype="auto" when initializing the model.
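For instance, a minimal sketch of controlling the dtype at load time (the checkpoint name is illustrative; any Persimmon checkpoint behaves the same way):

```python
import torch
from transformers import AutoModelForCausalLM

# load in float16, matching the dtype used by the original inference code
model = AutoModelForCausalLM.from_pretrained("adept/persimmon-8b-base", torch_dtype=torch.float16)

# or defer to the torch_dtype stored in the checkpoint config
model = AutoModelForCausalLM.from_pretrained("adept/persimmon-8b-base", torch_dtype="auto")
```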
Persimmon uses a sentencepiece-based tokenizer, with a Unigram model. It supports bytefallback, which is only available in tokenizers==0.14.0 for the fast tokenizer.
The LlamaTokenizer is used as it is a standard wrapper around sentencepiece. The chat template will be updated with the templating functions in a follow-up PR.
Tips:
To convert the model, you need to clone the original repository using git clone https://github.com/persimmon-ai-labs/adept-inference, then get the checkpoints:
The authors suggest using the following prompt format for the chat mode: f"human: {prompt}\n\nadept:"
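A small sketch of applying that format before generation, assuming model and tokenizer are an already-loaded Persimmon chat checkpoint:

```python
prompt = "What is the capital of Sweden?"
chat_input = f"human: {prompt}\n\nadept:"  # prompt format suggested by the authors

inputs = tokenizer(chat_input, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```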
PersimmonConfig
[[autodoc]] PersimmonConfig
PersimmonModel
[[autodoc]] PersimmonModel
- forward
PersimmonForCausalLM
[[autodoc]] PersimmonForCausalLM
- forward
PersimmonForSequenceClassification
[[autodoc]] PersimmonForSequenceClassification
- forward
```bash
wget https://axtkn4xl5cip.objectstorage.us-phoenix-1.oci.customer-oci.com/n/axtkn4xl5cip/b/adept-public-data/o/8b_chat_model_release.tar
tar -xvf 8b_chat_model_release.tar
```
Thereafter, models can be loaded via:
```python
from transformers import PersimmonForCausalLM, PersimmonTokenizer

# "/output/path" is a placeholder for the directory written by the conversion script
model = PersimmonForCausalLM.from_pretrained("/output/path")
tokenizer = PersimmonTokenizer.from_pretrained("/output/path")
```
google/deplot: DePlot fine-tuned on ChartQA dataset
Fine-tuning
To fine-tune DePlot, refer to the pix2struct fine-tuning notebook. For Pix2Struct models, we have found that fine-tuning the model with Adafactor and a cosine learning rate scheduler leads to faster convergence:
```python
from transformers.optimization import Adafactor, get_cosine_schedule_with_warmup

optimizer = Adafactor(model.parameters(), scale_parameter=False, relative_step=False, lr=0.01, weight_decay=1e-05)
scheduler = get_cosine_schedule_with_warmup(optimizer, num_warmup_steps=1000, num_training_steps=40000)
```
DePlot is a model trained using the Pix2Struct architecture. For API reference, see the Pix2Struct documentation.
MaskFormer
This is a recently introduced model, so the API hasn't been tested extensively. There may be some bugs or slight
breaking changes to fix in the future. If you see something strange, file a GitHub issue.
DePlot
Overview
DePlot was proposed in the paper DePlot: One-shot visual language reasoning by plot-to-table translation by Fangyu Liu, Julian Martin Eisenschlos, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Wenhu Chen, Nigel Collier, Yasemin Altun.
The abstract of the paper states the following:
```python
from transformers import AutoProcessor, Pix2StructForConditionalGeneration
import requests
from PIL import Image

model = Pix2StructForConditionalGeneration.from_pretrained("google/deplot")
processor = AutoProcessor.from_pretrained("google/deplot")
url = "https://raw.githubusercontent.com/vis-nlp/ChartQA/main/Cha...
```
This model was contributed by francesco. The original code can be found here.
Usage tips
Resources
All notebooks that illustrate inference as well as fine-tuning on custom data with MaskFormer can be found here.
Overview
The MaskFormer model was proposed in Per-Pixel Classification is Not All You Need for Semantic Segmentation by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov. MaskFormer addresses semantic segmentation with a mask classification paradigm instead of performing classic pixel-level classification.
The abstract from the paper is the following:
MaskFormer specific outputs
[[autodoc]] models.maskformer.modeling_maskformer.MaskFormerModelOutput
[[autodoc]] models.maskformer.modeling_maskformer.MaskFormerForInstanceSegmentationOutput
MaskFormerConfig
[[autodoc]] MaskFormerConfig
MaskFormerImageProcessor
[[autodoc]] MaskFormerImageProcessor
- preprocess
...
MaskFormer's Transformer decoder is identical to the decoder of DETR. During training, the authors of DETR did find it helpful to use auxiliary losses in the decoder, especially to help the model output the correct number of objects of each class. If you set the parameter use_auxiliary_loss of [MaskFormerConfig] to True, then prediction feedforward neural networks and Hungarian losses are added after each decoder layer (with the FFNs sharing parameters).
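A quick sketch of enabling this, with all other configuration values left at their defaults:

```python
from transformers import MaskFormerConfig, MaskFormerForInstanceSegmentation

# add auxiliary prediction heads and Hungarian losses after each decoder layer
config = MaskFormerConfig(use_auxiliary_loss=True)
model = MaskFormerForInstanceSegmentation(config)
```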
This model was contributed by Yih-Dar SHIEH. The original code can be found here.
Kosmos2Config
[[autodoc]] Kosmos2Config
Kosmos2ImageProcessor
Kosmos2Processor
[[autodoc]] Kosmos2Processor
- call
Kosmos2Model
[[autodoc]] Kosmos2Model
- forward
Kosmos2ForConditionalGeneration
[[autodoc]] Kosmos2ForConditionalGeneration
- forward
- generate
KOSMOS-2
Overview
The KOSMOS-2 model was proposed in Kosmos-2: Grounding Multimodal Large Language Models to the World by Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei.
KOSMOS-2 is a Transformer-based causal language model and is trained using the next-word prediction task on a web-scale dataset of grounded image-text pairs (GRIT).
Overview of tasks that KOSMOS-2 can handle. Taken from the original paper.
Example
```python
from PIL import Image
import requests
from transformers import AutoProcessor, Kosmos2ForConditionalGeneration

model = Kosmos2ForConditionalGeneration.from_pretrained("microsoft/kosmos-2-patch14-224")
processor = AutoProcessor.from_pretrained("microsoft/kosmos-2-patch14-224")
url = "https://huggingface.co/microsoft/ko...
```
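A sketch of the grounded generation step that typically follows, assuming image is a PIL image fetched from the URL above; the <grounding> prefix asks the model to link phrases to image regions:

```python
prompt = "<grounding>An image of"
inputs = processor(text=prompt, images=image, return_tensors="pt")

generated_ids = model.generate(**inputs, max_new_tokens=64)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]

# separate the caption from the grounded (bounding-box) entities
caption, entities = processor.post_process_generation(generated_text)
print(caption, entities)
```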
Wav2Vec2 is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
The Wav2Vec2 model was trained using connectionist temporal classification (CTC), so the model output has to be decoded using [Wav2Vec2CTCTokenizer].
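For illustration, a minimal end-to-end sketch, assuming the facebook/wav2vec2-base-960h checkpoint and a 1-D float array waveform sampled at 16 kHz:

```python
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

inputs = processor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# CTC decoding: argmax over the vocabulary, then collapse repeats and blanks
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)[0]
```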
Wav2Vec2
Overview
The Wav2Vec2 model was proposed in wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
The abstract from the paper is the following:
We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Wav2Vec2. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
A notebook on how to leverage a pretrained Wav2Vec2 model for emotion classification. 🌎
[Wav2Vec2ForCTC] is supported by this example script and notebook.
Audio classification task guide
🚀 Deploy
A blog post on how to deploy Wav2Vec2 for Automatic Speech Recognition with Hugging Face's Transformers & Amazon SageMaker.
A blog post on boosting Wav2Vec2 with n-grams in 🤗 Transformers.
A blog post on how to finetune Wav2Vec2 for English ASR with 🤗 Transformers.
A blog post on finetuning XLS-R for Multi-Lingual ASR with 🤗 Transformers.
A notebook on how to create YouTube captions from any video by transcribing audio with Wav2Vec2. 🌎
Wav2Vec2Config
[[autodoc]] Wav2Vec2Config
Wav2Vec2CTCTokenizer
[[autodoc]] Wav2Vec2CTCTokenizer
- call
- save_vocabulary
- decode
- batch_decode
- set_target_lang
Wav2Vec2FeatureExtractor
[[autodoc]] Wav2Vec2FeatureExtractor
- call
Wav2Vec2Processor
[[autodoc]] Wav2Vec2Processor
- call
...
```python
# Tail of the dataset.map function used for batched decoding; the function name
# and the preparation of `inputs` from `batch` are assumed from context
def map_to_pred(batch, pool):
    with torch.no_grad():
        logits = model(**inputs).logits

    transcription = processor.batch_decode(logits.cpu().numpy(), pool).text
    batch["transcription"] = transcription
    return batch
```
Let's see how to use a user-managed pool for batch decoding multiple audios:
```python
from multiprocessing import get_context
from transformers import AutoTokenizer, AutoProcessor, AutoModelForCTC
from datasets import load_dataset
import datasets
import torch

# import model, feature extractor, tokenizer
model = AutoModelForCTC.from_pretrained(...)  # checkpoint name truncated in the source
```
Wav2Vec2 specific outputs
[[autodoc]] models.wav2vec2_with_lm.processing_wav2vec2_with_lm.Wav2Vec2DecoderWithLMOutput
[[autodoc]] models.wav2vec2.modeling_wav2vec2.Wav2Vec2BaseModelOutput
[[autodoc]] models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForPreTrainingOutput
[[autodoc]] models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2ForPreTrainingOutput
Wav2Vec2Model
[[autodoc]] Wav2Vec2Model
- forward
Wav2Vec2ForCTC
[[autodoc]] Wav2Vec2ForCTC
- forward
- load_adapter
Wav2Vec2ForSequenceClassification
[[autodoc]] Wav2Vec2ForSequenceClassification
- forward
Wav2Vec2ForAudioFrameClassification
[[autodoc]] Wav2Vec2ForAudioFrameClassification
- forward
```python
# Note: the pool should be instantiated *after* `Wav2Vec2ProcessorWithLM`;
# otherwise, the LM won't be available to the pool's sub-processes.
# Select the number of processes and batch_size based on the number of CPU
# cores available and on the dataset size.
with get_context("fork").Pool(processes=2) as pool:
    result = dataset.map(
        ...
```
TFWav2Vec2Model
[[autodoc]] TFWav2Vec2Model
- call
TFWav2Vec2ForSequenceClassification
[[autodoc]] TFWav2Vec2ForSequenceClassification
- call
TFWav2Vec2ForCTC
[[autodoc]] TFWav2Vec2ForCTC
- call
FlaxWav2Vec2Model
[[autodoc]] FlaxWav2Vec2Model
- call
FlaxWav2Vec2ForCTC
[[autodoc]] FlaxWav2Vec2ForCTC
- call
GPT-Sw3
Overview
The GPT-Sw3 model was first proposed in
Lessons Learned from GPT-SW3: Building the First Large-Scale Generative Language Model for Swedish
by Ariel Ekgren, Amaru Cuba Gyllensten, Evangelia Gogoulou, Alice Heiman, Severine Verlinden, Joey Öhman,
Fredrik Carlsson, Magnus Sahlgren.
Since that first paper...
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("AI-Sweden-Models/gpt-sw3-356m")
model = AutoModelForCausalLM.from_pretrained("AI-Sweden-Models/gpt-sw3-356m")

input_ids = tokenizer("Träd är fina för att", return_tensors="pt")["input_ids"]
generated_token_ids = model.generate(inputs=input_ids, max_new_tokens=10, do_sample=True)[0]
print(tokenizer.decode(generated_token_ids))
```
Resources
Text classification task guide
Token classification task guide
Causal language modeling task guide
The implementation uses the GPT2Model coupled with our GPTSw3Tokenizer. Refer to the GPT2Model documentation
for API reference and examples.
Note that sentencepiece is required to use our tokenizer and can be installed with: pip install transformers[sentencepiece]
Video Vision Transformer (ViViT)
Overview
The Vivit model was proposed in ViViT: A Video Vision Transformer by Anurag Arnab, Mostafa Dehghani, Georg Heigold, Chen Sun, Mario Lučić, Cordelia Schmid.
The paper proposes one of the first successful pure-transformer based set of models for video understanding.
The abstract from the paper is the following:
ResNet
Overview
The ResNet model was proposed in Deep Residual Learning for Image Recognition by Kaiming He, Xiangyu Zhang, Shaoqing Ren and Jian Sun. Our implementation follows the small changes made by Nvidia: we apply stride=2 for downsampling in the bottleneck's 3x3 conv and not in the first 1x1. This is generally known as "ResNet v1.5".
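As a quick illustration, a minimal classification sketch, assuming the microsoft/resnet-50 checkpoint and a PIL image image:

```python
import torch
from transformers import AutoImageProcessor, ResNetForImageClassification

processor = AutoImageProcessor.from_pretrained("microsoft/resnet-50")
model = ResNetForImageClassification.from_pretrained("microsoft/resnet-50")

inputs = processor(image, return_tensors="pt")  # `image` is a PIL image (assumption)
with torch.no_grad():
    logits = model(**inputs).logits

predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```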
[ResNetForImageClassification] is supported by this example script and notebook.
See also: Image classification task guide
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
This model was contributed by Francesco. The TensorFlow version of this model was added by amyeroberts. The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ResNet.
VAN
This model is in maintenance mode only, we don't accept any new PRs changing its code.
If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0.
You can do so by running the following command: pip install -U transformers==4.30.0.
ResNetModel
[[autodoc]] ResNetModel
- forward
ResNetForImageClassification
[[autodoc]] ResNetForImageClassification
- forward
TFResNetModel
[[autodoc]] TFResNetModel
- call
TFResNetForImageClassification
[[autodoc]] TFResNetForImageClassification
- call
FlaxResNetModel
[[autodoc]] FlaxResNetModel
- call
Overview
The VAN model was proposed in Visual Attention Network by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu.
This paper introduces a new attention layer based on convolution operations able to capture both local and distant relationships. This is done by combining normal and large kernel convolution layers.
VAN does not have an embedding layer, thus the hidden_states will have a length equal to the number of stages.
The figure below illustrates the architecture of a Visual Attention Layer. Taken from the original paper.
This model was contributed by Francesco. The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with VAN.
[VanForImageClassification] is supported by this example script and notebook.
See also: Image classification task guide
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
FlauBERT
Overview
The FlauBERT model was proposed in the paper FlauBERT: Unsupervised Language Model Pre-training for French by Hang Le et al. It's a transformer model pretrained using a masked language
modeling (MLM) objective (like BERT).
The abstract from the paper is the following:
Language models have become a key step to achieve state-of-the-art results in many different Natural Language Processing (NLP) tasks.
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
FlaubertConfig
[[autodoc]] FlaubertConfig
FlaubertTokenizer
[[autodoc]] FlaubertTokenizer
TVP
Overview
The text-visual prompting (TVP) framework was proposed in the paper Text-Visual Prompting for Efficient 2D Temporal Video Grounding by Yimeng Zhang, Xin Chen, Jinghan Jia, Sijia Liu, Ke Ding.
The abstract from the paper is the following:
In this paper, we study the problem of temporal video grounding (TVG)...
Tips:
This implementation of TVP uses [BertTokenizer] to generate text embeddings and a ResNet-50 model to compute visual embeddings.
Checkpoints for the pre-trained tvp-base are released.
Please refer to Table 2 for TVP's performance on the Temporal Video Grounding task.
TFFlaubertModel
[[autodoc]] TFFlaubertModel
- call
TFFlaubertWithLMHeadModel
[[autodoc]] TFFlaubertWithLMHeadModel
- call
TFFlaubertForSequenceClassification
[[autodoc]] TFFlaubertForSequenceClassification
- call
TFFlaubertForMultipleChoice
[[autodoc]] TFFlaubertForMultipleChoice
- call
TFFlaubertForTokenClassification
[[autodoc]] TFFlaubertForTokenClassification
- call
FlaubertModel
[[autodoc]] FlaubertModel
- forward
FlaubertWithLMHeadModel
[[autodoc]] FlaubertWithLMHeadModel
- forward
FlaubertForSequenceClassification
[[autodoc]] FlaubertForSequenceClassification
- forward
FlaubertForMultipleChoice
[[autodoc]] FlaubertForMultipleChoice
- forward
FlaubertForTokenClassification
[[autodoc]] FlaubertForTokenClassification
- forward
TvpConfig
[[autodoc]] TvpConfig
TvpImageProcessor
[[autodoc]] TvpImageProcessor
- preprocess
TvpProcessor
[[autodoc]] TvpProcessor
- call
TvpModel
[[autodoc]] TvpModel
- forward
TvpForVideoGrounding
[[autodoc]] TvpForVideoGrounding
- forward
TVP architecture. Taken from the original paper.
This model was contributed by Jiqing Feng. The original code can be found here.
Usage tips and examples
Prompts are optimized perturbation patterns, which would be added to input video frames or text features. Universal set refers to using the same exact set of prompts...
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
FNetConfig
[[autodoc]] FNetConfig
FNetTokenizer
[[autodoc]] FNetTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
FNetTokenizerFast
[[autodoc]] FNetTokenizerFast
FNetModel
[[autodoc]] FNetModel
- forward
FNetForPreTraining
[[autodoc]] FNetForPreTraining
- forward
FNet
Overview
The FNet model was proposed in FNet: Mixing Tokens with Fourier Transforms by
James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon. The model replaces the self-attention layer in a BERT
model with a Fourier transform, which returns only the real parts of the transform. The model is significantly faster than the BERT model because it has fewer parameters and is more memory efficient.
UniSpeech is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. Please
use [Wav2Vec2Processor] for the feature extraction.
The UniSpeech model can be fine-tuned using connectionist temporal classification (CTC), so the model output has to be
decoded using [Wav2Vec2CTCTokenizer].
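A minimal sketch of that flow; the checkpoint name is illustrative (substitute any fine-tuned UniSpeech CTC checkpoint), and waveform is a 16 kHz float array:

```python
import torch
from transformers import Wav2Vec2Processor, UniSpeechForCTC

# checkpoint name shown for illustration
processor = Wav2Vec2Processor.from_pretrained("microsoft/unispeech-1350-en-90-it-ft-1h")
model = UniSpeechForCTC.from_pretrained("microsoft/unispeech-1350-en-90-it-ft-1h")

inputs = processor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)[0]
```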
UniSpeech
Overview
The UniSpeech model was proposed in UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang.
The abstract from the paper is the following:
In this paper, we propose a unified pre-training approach called UniSpeech to learn speech representations with both unlabeled and labeled data, in which supervised phonetic CTC learning and phonetically-aware contrastive self-supervised learning are conducted in a multi-task learning manner.
CPMAnt
Overview
CPM-Ant is an open-source Chinese pre-trained language model (PLM) with 10B parameters. It is also the first milestone of the live training process of CPM-Live. The training process is cost-effective and environment-friendly. CPM-Ant also achieves promising results with delta tuning on the CUGE benchmark.
Audio classification task guide
Automatic speech recognition task guide
UniSpeechConfig
[[autodoc]] UniSpeechConfig
UniSpeech specific outputs
[[autodoc]] models.unispeech.modeling_unispeech.UniSpeechForPreTrainingOutput
UniSpeechModel
[[autodoc]] UniSpeechModel
- forward
UniSpeechForCTC
[[autodoc]] UniSpeechForCTC
- forward