from transformers import LlamaForCausalLM, CodeLlamaTokenizer
tokenizer = CodeLlamaTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")
model = LlamaForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf")
PROMPT = '''def remove_non_ascii(s: str) -> str:
    """ <FILL_ME>
    return result
'''
input_ids = tokenizer(PROMPT, ... |
    Returns:
        The string with non-ASCII characters removed.
    """
    result = ""
    for c in s:
        if ord(c) < 128:
            result += c
    return result
If you only want the infilled part:
python
from transformers import pipeline
import torch
generator = pipeline("text-generation",model="codellama/CodeLlama-7b-hf",torch_dtype=tor... |
    Args:
        s: The string to remove non-ASCII characters from.
    Returns:
        The string with non-ASCII characters removed.
    """
    result = ""
    for c in s:
        if ord(c) < 128:
            result += c
    return result
If you only want the infilled part:
python
XGLM
Overview
The XGLM model was proposed in Few-shot Learning with Multilingual Language Models
by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal,
Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian ... |
Under the hood, the tokenizer automatically splits by <FILL_ME> to create a formatted input string that follows the original training pattern. This is more robust than preparing the pattern yourself: it avoids pitfalls, such as token glueing, that are very hard to debug. To see how much CPU and GPU memory you need f... |
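To make the <FILL_ME> flow described above concrete, here is a minimal sketch that mirrors the snippet earlier on this page (it assumes the codellama/CodeLlama-7b-hf checkpoint and enough memory to load it; the variable names are ours):
python
from transformers import LlamaForCausalLM, CodeLlamaTokenizer

tokenizer = CodeLlamaTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")
model = LlamaForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf")

# The tokenizer splits on <FILL_ME> and builds the prefix/suffix infilling pattern for us.
PROMPT = '''def remove_non_ascii(s: str) -> str:
    """ <FILL_ME>
    return result
'''
input_ids = tokenizer(PROMPT, return_tensors="pt")["input_ids"]
generated_ids = model.generate(input_ids, max_new_tokens=128)

# Keep only the newly generated tokens, i.e. the infilled part.
filling = tokenizer.batch_decode(generated_ids[:, input_ids.shape[1]:], skip_special_tokens=True)[0]
print(PROMPT.replace("<FILL_ME>", filling))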
Causal language modeling task guide
XGLMConfig
[[autodoc]] XGLMConfig
XGLMTokenizer
[[autodoc]] XGLMTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
XGLMTokenizerFast
[[autodoc]] XGLMTokenizerFast
XGLMModel
[[autodoc]] XGLM... |
Code Llama has the same architecture as the Llama2 models, refer to Llama2's documentation page for the API reference.
Find Code Llama tokenizer reference below. |
CodeLlamaTokenizer
[[autodoc]] CodeLlamaTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
CodeLlamaTokenizerFast
[[autodoc]] CodeLlamaTokenizerFast
- build_inputs_with_special_tokens
- get_special_tokens_mask
- cre... |
LayoutLMV2
Overview
The LayoutLMV2 model was proposed in LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu,
Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou. LayoutLMV2 improves LayoutLM to obtain
s... |
TFXGLMModel
[[autodoc]] TFXGLMModel
- call
TFXGLMForCausalLM
[[autodoc]] TFXGLMForCausalLM
- call
FlaxXGLMModel
[[autodoc]] FlaxXGLMModel
- call
FlaxXGLMForCausalLM
[[autodoc]] FlaxXGLMForCausalLM
- call |
python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'
python -m pip install torchvision tesseract
(If you are developing for LayoutLMv2, note that passing the doctests also requires the installation of these packages.)
Usage tips |
python
def normalize_bbox(bbox, width, height):
return [
int(1000 * (bbox[0] / width)),
int(1000 * (bbox[1] / height)),
int(1000 * (bbox[2] / width)),
int(1000 * (bbox[3] / height)),
]
Here, width and height correspond to the width and height of the original document in which t... |
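As a quick check of the formula, here is a small usage sketch with made-up pixel coordinates (the numbers are only illustrative):
python
# A box of (x0, y0, x1, y1) = (228, 35, 532, 88) pixels on an 800x1000 page.
bbox = [228, 35, 532, 88]
width, height = 800, 1000
print(normalize_bbox(bbox, width, height))
# -> [285, 35, 665, 88]: every coordinate is now on the 0-1000 scale the model expects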
However, this model includes a brand new [~transformers.LayoutLMv2Processor] which can be used to directly
prepare data for the model (including applying OCR under the hood). More information can be found in the "Usage"
section below. |
The abstract from the paper is the following:
Pre-training of text and layout has proved effective in a variety of visually-rich document understanding tasks due to
its effective model architecture and the advantage of large-scale unlabeled scanned/digital-born documents. In this
paper, we present LayoutLMv2 by pre-t... |
information extraction from scanned documents: the FUNSD dataset (a
collection of 199 annotated forms comprising more than 30,000 words), the CORD
dataset (a collection of 800 receipts for training, 100 for validation and 100 for testing), the SROIE dataset (a collection of 626 receipts for training and 347 recei... |
The main difference between LayoutLMv1 and LayoutLMv2 is that the latter incorporates visual embeddings during
pre-training (while LayoutLMv1 only adds visual embeddings during fine-tuning).
LayoutLMv2 adds both a relative 1D attention bias as well as a spatial 2D attention bias to the attention scores in
the sel... |
In addition, there's LayoutXLM, which is a multilingual version of LayoutLMv2. More information can be found on
LayoutXLM's documentation page.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LayoutLMv2. If you're interested in submitting a resource to be... |
Usage: LayoutLMv2Processor
The easiest way to prepare data for the model is to use [LayoutLMv2Processor], which internally
combines an image processor ([LayoutLMv2ImageProcessor]) and a tokenizer
([LayoutLMv2Tokenizer] or [LayoutLMv2TokenizerFast]). The image processor
handles the image modality, while the tokenizer h... |
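A minimal sketch of putting the two components together (it assumes the microsoft/layoutlmv2-base-uncased checkpoint and the detectron2/Tesseract dependencies mentioned elsewhere on this page):
python
from transformers import LayoutLMv2ImageProcessor, LayoutLMv2TokenizerFast, LayoutLMv2Processor

image_processor = LayoutLMv2ImageProcessor()  # apply_ocr=True by default
tokenizer = LayoutLMv2TokenizerFast.from_pretrained("microsoft/layoutlmv2-base-uncased")
processor = LayoutLMv2Processor(image_processor, tokenizer)

# Equivalently, load both parts in one call:
# processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")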
A notebook on how to finetune LayoutLMv2 for text-classification on RVL-CDIP dataset.
See also: Text classification task guide
A notebook on how to finetune LayoutLMv2 for question-answering on DocVQA dataset.
See also: Question answering task guide
See also: Document question answering task guide
A notebook on how t... |
Internally, [~transformers.LayoutLMv2Model] will send the image input through its visual backbone to
obtain a lower-resolution feature map, whose shape is equal to the image_feature_pool_shape attribute of
[~transformers.LayoutLMv2Config]. This feature map is then flattened to obtain a sequence of image tokens. A... |
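A quick way to inspect how many image tokens this produces, assuming the default configuration values (check your own config, as these are configurable):
python
from transformers import LayoutLMv2Config

config = LayoutLMv2Config()
print(config.image_feature_pool_shape)  # e.g. [7, 7, 256] for the default config
height, width, _ = config.image_feature_pool_shape
print(height * width)                   # number of image tokens appended after the text tokens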
In short, one can provide a document image (and possibly additional data) to [LayoutLMv2Processor],
and it will create the inputs expected by the model. Internally, the processor first uses
[LayoutLMv2ImageProcessor] to apply OCR on the image to get a list of words and normalized
bounding boxes, as well as to resize the... |
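A minimal sketch of this default flow (apply_ocr=True); the image path is a placeholder for your own document image:
python
from PIL import Image
from transformers import LayoutLMv2Processor

processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")
image = Image.open("document.png").convert("RGB")  # placeholder path

# The processor runs OCR internally and returns input_ids, bbox and image tensors.
encoding = processor(image, return_tensors="pt")
print(encoding.keys())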
Use case 2: document image classification (training, inference) + token classification (inference), apply_ocr=False
In case one wants to do OCR themselves, one can initialize the image processor with apply_ocr set to
False. In that case, one should provide the words and corresponding (normalized) bounding boxes thems... |
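A sketch of this flow with hand-supplied words and already-normalized boxes; the words, boxes and image path are placeholders:
python
from PIL import Image
from transformers import LayoutLMv2ImageProcessor, LayoutLMv2TokenizerFast, LayoutLMv2Processor

# Build a processor that does not run OCR itself.
image_processor = LayoutLMv2ImageProcessor(apply_ocr=False)
tokenizer = LayoutLMv2TokenizerFast.from_pretrained("microsoft/layoutlmv2-base-uncased")
processor = LayoutLMv2Processor(image_processor, tokenizer)

image = Image.open("document.png").convert("RGB")  # placeholder path
words = ["hello", "world"]
boxes = [[637, 773, 693, 782], [698, 773, 733, 782]]  # already on the 0-1000 scale
encoding = processor(image, words, boxes=boxes, return_tensors="pt")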
Use case 3: token classification (training), apply_ocr=False
For token classification tasks (such as FUNSD, CORD, SROIE, Kleister-NDA), one can also provide the corresponding word
labels in order to train a model. The processor will then convert these into token-level labels. By default, it
will only label the first ... |
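A sketch of the same apply_ocr=False setup with word-level labels; all values are placeholders:
python
from PIL import Image
from transformers import LayoutLMv2ImageProcessor, LayoutLMv2TokenizerFast, LayoutLMv2Processor

image_processor = LayoutLMv2ImageProcessor(apply_ocr=False)
tokenizer = LayoutLMv2TokenizerFast.from_pretrained("microsoft/layoutlmv2-base-uncased")
processor = LayoutLMv2Processor(image_processor, tokenizer)

image = Image.open("document.png").convert("RGB")  # placeholder path
words = ["hello", "world"]
boxes = [[637, 773, 693, 782], [698, 773, 733, 782]]  # normalized boxes
word_labels = [1, 2]

encoding = processor(image, words, boxes=boxes, word_labels=word_labels, return_tensors="pt")
print(encoding["labels"])  # token-level labels; ignored positions are set to -100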
Use case 5: visual question answering (inference), apply_ocr=False
For visual question answering tasks (such as DocVQA), you can provide a question to the processor. If you want to
perform OCR yourself, you can provide your own words and (normalized) bounding boxes to the processor.
python
from transformers import Layo... |
Use case 4: visual question answering (inference), apply_ocr=True
For visual question answering tasks (such as DocVQA), you can provide a question to the processor. By default, the
processor will apply OCR on the image, and create [CLS] question tokens [SEP] word tokens [SEP].
python
from transformers import LayoutLMv2... |
LayoutLMv2Config
[[autodoc]] LayoutLMv2Config
LayoutLMv2FeatureExtractor
[[autodoc]] LayoutLMv2FeatureExtractor
- call
LayoutLMv2ImageProcessor
[[autodoc]] LayoutLMv2ImageProcessor
- preprocess
LayoutLMv2Tokenizer
[[autodoc]] LayoutLMv2Tokenizer
- call
- save_vocabulary
LayoutLMv2TokenizerFast
[[autod... |
RetriBERT
This model is in maintenance mode only, so we won't accept any new PRs changing its code.
If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0.
You can do so by running the following command: pip install -U transformers==4.30.0. |
Overview
The RetriBERT model was proposed in the blog post Explain Anything Like I'm Five: A Model for Open Domain Long Form
Question Answering. RetriBERT is a small model that uses either a single or
pair of BERT encoders with lower-dimension projection for dense semantic indexing of text.
This model was contributed... |
[BarkSemanticModel] (also referred to as the 'text' model): a causal auto-regressive transformer model that takes as input tokenized text, and predicts semantic text tokens that capture the meaning of the text.
[BarkCoarseModel] (also referred to as the 'coarse acoustics' model): a causal autoregressive transformer, ... |
It should be noted that each of the first three modules can support conditional speaker embeddings to condition the output sound according to a specific predefined voice.
This model was contributed by Yoach Lacombe (ylacombe) and Sanchit Gandhi (sanchit-gandhi).
The original code can be found here.
Optimizing Bark
Bark... |
Bark
Overview
Bark is a transformer-based text-to-speech model proposed by Suno AI in suno-ai/bark.
Bark is made of 4 main models: |
pip install -U flash-attn --no-build-isolation
Usage
To load a model using Flash Attention 2, we can pass the attn_implementation="flash_attention_2" flag to .from_pretrained. We'll also load the model in half-precision (e.g. torch.float16), since it results in almost no degradation to audio quality but significantly... |
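A minimal sketch of such a load (it assumes a CUDA GPU, the flash-attn package installed as above, and the suno/bark-small checkpoint):
python
import torch
from transformers import BarkModel

model = BarkModel.from_pretrained(
    "suno/bark-small",
    torch_dtype=torch.float16,                # half precision, as discussed above
    attn_implementation="flash_attention_2",  # requires flash-attn and a supported GPU
).to("cuda")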
To put this into perspective, on an NVIDIA A100 and when generating 400 semantic tokens with a batch size of 16, you can get 17 times the throughput and still be 2 seconds faster than generating sentences one by one with the native model implementation. In other words, all the samples will be generated 17 times faste... |
from transformers import AutoProcessor, BarkModel
processor = AutoProcessor.from_pretrained("suno/bark")
model = BarkModel.from_pretrained("suno/bark")
voice_preset = "v2/en_speaker_6"
inputs = processor("Hello, my dog is cute", voice_preset=voice_preset)
audio_array = model.generate(**inputs)
audio_array = audio_array... |
Using CPU offload
As mentioned above, Bark is made up of 4 sub-models, which are called up sequentially during audio generation. In other words, while one sub-model is in use, the other sub-models are idle.
If you're using a CUDA device, a simple solution to benefit from an 80% reduction in memory footprint is to off... |
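A minimal sketch of enabling this offloading (it assumes a CUDA device and the accelerate package):
python
from transformers import BarkModel

model = BarkModel.from_pretrained("suno/bark")
# Keeps idle sub-models on CPU and moves only the active one to the GPU.
model.enable_cpu_offload()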
The model can also produce nonverbal communications like laughing, sighing and crying.
python
# Adding non-speech cues to the input text
inputs = processor("Hello uh [clears throat], my dog is cute [laughter]")
audio_array = model.generate(**inputs)
audio_array = audio_array.cpu().numpy().squeeze()
To save the audio, si... |
To save the audio, simply take the sample rate from the model config and some scipy utility:
python
from scipy.io.wavfile import write as write_wav
# save audio to disk, but first take the sample rate from the model config
sample_rate = model.generation_config.sample_rate
write_wav("bark_generation.wav", sample_rate, audi... |
Find out more on inference optimization techniques here.
Usage tips
Suno offers a library of voice presets in a number of languages here.
These presets are also uploaded in the hub here or here.
python
# Multilingual speech - simplified Chinese
inputs = processor("惊人的!我会说中文")
# Multilingual speech - French - let's use a voice_preset as well
inputs = processor("Incroyable! Je peux générer du son.", voice_preset="fr_speaker_5")
# Bark can also generate music. You can help it out by adding music notes around your lyrics.
inpu... |
ErnieM
Overview
The ErnieM model was proposed in ERNIE-M: Enhanced Multilingual Representation by Aligning
Cross-lingual Semantics with Monolingual Corpora by Xuan Ouyang, Shuohuan Wang, Chao Pang, Yu Sun,
Hao Tian, Hua Wu, Haifeng Wang.
The abstract from the paper is the following:
Recent studies have demonstrated t... |
Ernie-M is a BERT-like model so it is a stacked Transformer Encoder.
Instead of using MaskedLM for pretraining (like BERT) the authors used two novel techniques: Cross-attention Masked Language Modeling and Back-translation Masked Language Modeling. For now these two LMHead objectives are not implemented here.
It is a ... |
ErnieMConfig
[[autodoc]] ErnieMConfig
ErnieMTokenizer
[[autodoc]] ErnieMTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
ErnieMModel
[[autodoc]] ErnieMModel
- forward
ErnieMForSequenceClassification
[[autodoc]] ErnieMFo... |
BarkConfig
[[autodoc]] BarkConfig
- all
BarkProcessor
[[autodoc]] BarkProcessor
- all
- call
BarkModel
[[autodoc]] BarkModel
- generate
- enable_cpu_offload
BarkSemanticModel
[[autodoc]] BarkSemanticModel
- forward
BarkCoarseModel
[[autodoc]] BarkCoarseModel
- forward
BarkFineModel
[[autod... |
Resources
Text classification task guide
Token classification task guide
Question answering task guide
Multiple choice task guide |
Conformer Blocks
Convolution Module
🤗 Transformers Usage
You can run FastSpeech2Conformer locally with the 🤗 Transformers library.
First install the 🤗 Transformers library and g2p-en:
pip install --upgrade pip
pip install --upgrade transformers g2p-en
Run inference via the Transformers modelling code with the mode... |
python
from transformers import FastSpeech2ConformerTokenizer, FastSpeech2ConformerModel, FastSpeech2ConformerHifiGan
import soundfile as sf
tokenizer = FastSpeech2ConformerTokenizer.from_pretrained("espnet/fastspeech2_conformer")
inputs = tokenizer("Hello, my dog is cute.", return_tensors="pt")
input_ids = inputs["inp... |
SegGPT
Overview
The SegGPT model was proposed in SegGPT: Segmenting Everything In Context by Xinlong Wang, Xiaosong Zhang, Yue Cao, Wen Wang, Chunhua Shen, Tiejun Huang. SegGPT employs a decoder-only Transformer that can generate a segmentation mask given an input image, a prompt image and its corresponding prompt mas... |
This model was contributed by EduardoPacheco.
The original code can be found here.
SegGptConfig
[[autodoc]] SegGptConfig
SegGptImageProcessor
[[autodoc]] SegGptImageProcessor
- preprocess
- post_process_semantic_segmentation
SegGptModel
[[autodoc]] SegGptModel
- forward
SegGptForImageSegmentation
[[autodoc]... |
FastSpeech2Conformer
Overview
The FastSpeech2Conformer model was proposed with the paper Recent Developments On Espnet Toolkit Boosted By Conformer by Pengcheng Guo, Florian Boyer, Xuankai Chang, Tomoki Hayashi, Yosuke Higuchi, Hirofumi Inaguma, Naoyuki Kamo, Chenda Li, Daniel Garcia-Romero, Jiatong Shi, Jing Shi, Shi... |
python
from transformers import FastSpeech2ConformerTokenizer, FastSpeech2ConformerWithHifiGan
import soundfile as sf
tokenizer = FastSpeech2ConformerTokenizer.from_pretrained("espnet/fastspeech2_conformer")
inputs = tokenizer("Hello, my dog is cute.", return_tensors="pt")
input_ids = inputs["input_ids"]
model = FastSp... |
Run inference via the Transformers modelling code with the model and hifigan combined
Run inference with a pipeline and specify which vocoder to use
python
from transformers import pipeline, FastSpeech2ConformerHifiGan
import soundfile as sf
vocoder = FastSpeech2ConformerHifiGan.from_pretrained("espnet/fastspeech2_conformer_hifigan")
synthesiser = pipeline(model="espnet/fastspeech2_conformer", vocoder=vo... |
FastSpeech2ConformerConfig
[[autodoc]] FastSpeech2ConformerConfig
FastSpeech2ConformerHifiGanConfig
[[autodoc]] FastSpeech2ConformerHifiGanConfig
FastSpeech2ConformerWithHifiGanConfig
[[autodoc]] FastSpeech2ConformerWithHifiGanConfig
FastSpeech2ConformerTokenizer
[[autodoc]] FastSpeech2ConformerTokenizer
- call
... |
Usage of X-CLIP is identical to CLIP.
X-CLIP architecture. Taken from the original paper.
This model was contributed by nielsr.
The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with X-CLIP.
Demo notebooks for X-CLIP ca... |
X-CLIP
Overview
The X-CLIP model was proposed in Expanding Language-Image Pretrained Models for General Video Recognition by Bolin Ni, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, Haibin Ling.
X-CLIP is a minimal extension of CLIP for video. The model consists of a text encoder,... |
VideoMAE
Overview
The VideoMAE model was proposed in VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training by Zhan Tong, Yibing Song, Jue Wang, Limin Wang.
VideoMAE extends masked auto encoders (MAE) to video, claiming state-of-the-art performance on several video classificat... |
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
XCLIPProcessor
[[autodoc]] XCLIPProcessor
XCLIPConfig
[[autodoc]] XCLIPConfig
- from_te... |
VideoMAE pre-training. Taken from the original paper.
This model was contributed by nielsr.
The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with VideoMAE. If
you're interested in submitting a resource to be included her... |
Vision Transformer (ViT)
Overview
The Vision Transformer (ViT) model was proposed in An Image is Worth 16x16 Words: Transformers for Image Recognition
at Scale by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk
Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigol... |
ViT architecture. Taken from the original paper.
Following the original Vision Transformer, some follow-up works have been made: |
BEiT (BERT pre-training of Image Transformers) by Microsoft Research. BEiT models outperform supervised pre-trained
vision transformers using a self-supervised method inspired by BERT (masked image modeling) and based on a VQ-VAE. |
DINO (a method for self-supervised training of Vision Transformers) by Facebook AI. Vision Transformers trained using
the DINO method show very interesting properties not seen with convolutional models. They are capable of segmenting
objects, without having ever been trained to do so. DINO checkpoints can be found ... |
DeiT (Data-efficient Image Transformers) by Facebook AI. DeiT models are distilled vision transformers.
The authors of DeiT also released more efficiently trained ViT models, which you can directly plug into [ViTModel] or
[ViTForImageClassification]. There are 4 variants available (in 3 different sizes): facebook... |
MAE (Masked Autoencoders) by Facebook AI. By pre-training Vision Transformers to reconstruct pixel values for a high portion
(75%) of masked patches (using an asymmetric encoder-decoder architecture), the authors show that this simple method outperforms
supervised pre-training after fine-tuning. |
This model was contributed by nielsr. The original code (written in JAX) can be
found here.
Note that we converted the weights from Ross Wightman's timm library,
who already converted the weights from JAX to PyTorch. Credits go to him!
Usage tips |
Resources
Demo notebooks regarding inference as well as fine-tuning ViT on custom data can be found here.
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ViT. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request... |
A blog post on how to Fine-Tune ViT for Image Classification with Hugging Face Transformers
A blog post on Image Classification with Hugging Face Transformers and Keras
A notebook on Fine-tuning for Image Classification with Hugging Face Transformers
A notebook on how to Fine-tune the Vision Transformer on CIFAR-10 wit... |
⚗️ Optimization
A blog post on how to Accelerate Vision Transformer (ViT) with Quantization using Optimum
⚡️ Inference
A notebook on Quick demo: Vision Transformer (ViT) by Google Brain
🚀 Deploy
A blog post on Deploying Tensorflow Vision Models in Hugging Face with TF Serving
A blog post on Deploying Hugging Face... |
ViTConfig
[[autodoc]] ViTConfig
ViTFeatureExtractor
[[autodoc]] ViTFeatureExtractor
- call
ViTImageProcessor
[[autodoc]] ViTImageProcessor
- preprocess
ViTModel
[[autodoc]] ViTModel
- forward
ViTForMaskedImageModeling
[[autodoc]] ViTForMaskedImageModeling
- forward
ViTForImageClassification
[[autodoc]]... |
TFViTModel
[[autodoc]] TFViTModel
- call
TFViTForImageClassification
[[autodoc]] TFViTForImageClassification
- call
FlaxViTModel
[[autodoc]] FlaxViTModel
- call
FlaxViTForImageClassification
[[autodoc]] FlaxViTForImageClassification
- call |
LLaMA
Overview
The LLaMA model was proposed in LLaMA: Open and Efficient Foundation Language Models by Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guilla... |
Weights for the LLaMA models can be obtained by filling out this form
After downloading the weights, they will need to be converted to the Hugging Face Transformers format using the conversion script. The script can be called with the following (example) command:
python src/transformers/models/llama/convert_llama... |
After conversion, the model and tokenizer can be loaded via:
python
from transformers import LlamaForCausalLM, LlamaTokenizer
tokenizer = LlamaTokenizer.from_pretrained("/output/path")
model = LlamaForCausalLM.from_pretrained("/output/path") |
To feed images to the Transformer encoder, each image is split into a sequence of fixed-size non-overlapping patches,
which are then linearly embedded. A [CLS] token is added to serve as representation of an entire image, which can be
used for classification. The authors also add absolute position embeddings, and... |
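A small sketch that makes the resulting sequence length visible, assuming the google/vit-base-patch16-224 checkpoint (224x224 images split into 16x16 patches):
python
import torch
from transformers import ViTModel

model = ViTModel.from_pretrained("google/vit-base-patch16-224")
pixel_values = torch.randn(1, 3, 224, 224)  # a dummy image batch

with torch.no_grad():
    outputs = model(pixel_values=pixel_values)

# (224 / 16) ** 2 = 196 patch embeddings plus the [CLS] token = 197 positions.
print(outputs.last_hidden_state.shape)  # torch.Size([1, 197, 768])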
The LLaMA tokenizer is a BPE model based on sentencepiece. One quirk of sentencepiece is that when decoding a sequence, if the first token is the start of the word (e.g. "Banana"), the tokenizer does not prepend the prefix space to the string.
python
from transformers import LlamaForCausalLM, LlamaTokenizer
tokenizer = LlamaTokenizer.from_pretrained("/output/path")
model = LlamaForCausalLM.from_pretrained("/output/path")
Note that executing the script requires enough CPU RAM to host the whole model in float16 precision (even if the biggest versions
come in se... |
This model was contributed by zphang with contributions from BlackSamorez. The code of the implementation in Hugging Face is based on GPT-NeoX here. The original code of the authors can be found here. The Flax version of the implementation was contributed by afmck with the code in the implementation based on Hugging Fa...
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LLaMA. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplica... |
Llama2: Llama2 is an improved version of Llama with some architectural tweaks (Grouped Query Attention), and is pre-trained on 2 trillion tokens. Refer to the documentation of Llama2 which can be found here.
A notebook on how to use prompt tuning to adapt the LLaMA model for text classification task. 🌎
StackLLaMA: A hands-on guide to train LLaMA with RLHF, a blog post about how to train LLaMA to answer questions on Stack Exchange with RLHF. |
⚗️ Optimization
- A notebook on how to fine-tune LLaMA model using xturing library on GPU which has limited memory. 🌎
⚡️ Inference
- A notebook on how to run the LLaMA Model using PeftModel from the 🤗 PEFT library. 🌎
- A notebook on how to load a PEFT adapter LLaMA model with LangChain. 🌎
🚀 Deploy
- A notebook... |
XLSR-Wav2Vec2
Overview
The XLSR-Wav2Vec2 model was proposed in Unsupervised Cross-Lingual Representation Learning For Speech Recognition by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael
Auli.
The abstract from the paper is the following:
This paper presents XLSR which learns cross-lingu... |
MVP follows a standard Transformer encoder-decoder architecture.
MVP is supervised pre-trained using labeled datasets.
MVP also has task-specific soft prompts to stimulate the model's capacity in performing a certain task.
MVP is specially designed for natural language generation and can be adapted to a wide range of... |
XLSR-Wav2Vec2 is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
XLSR-Wav2Vec2 model was trained using connectionist temporal classification (CTC) so the model output has to be
decoded using [Wav2Vec2CTCTokenizer].
XLSR-Wav2Vec2's architecture is based on the Wav2Vec... |
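A minimal sketch of the CTC decoding flow described above; the checkpoint name is a placeholder for any XLSR-Wav2Vec2 model fine-tuned with a CTC head, and the waveform is a dummy stand-in:
python
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# "your-org/xlsr-finetuned-asr" is a placeholder for a CTC fine-tuned XLSR-Wav2Vec2 checkpoint.
processor = Wav2Vec2Processor.from_pretrained("your-org/xlsr-finetuned-asr")
model = Wav2Vec2ForCTC.from_pretrained("your-org/xlsr-finetuned-asr")

raw_speech = [0.0] * 16000  # one second of silent 16 kHz audio standing in for a real waveform
inputs = processor(raw_speech, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)  # decoded with Wav2Vec2CTCTokenizer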
MVP
Overview
The MVP model was proposed in MVP: Multi-task Supervised Pre-training for Natural Language Generation by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
According to the abstract, |
This model was contributed by Tianyi Tang. The detailed information and instructions can be found here.
Usage tips |
Usage examples
For summarization, here is an example of using MVP and MVP with summarization-specific prompts.
python
from transformers import MvpTokenizer, MvpForConditionalGeneration
tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp")
model_with_prompt = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp-summarization")
inputs = tokenizer(
"Summar... |
For data-to-text generation, here is an example of using MVP and multi-task pre-trained variants.
python
We have released a series of models here, including MVP, MVP with task-specific prompts, and multi-task pre-trained variants.
If you want to use a model without prompts (standard Transformer), you can load it through MvpForConditionalGeneration.from_pretrained('RUCAIBox/mvp').
If you want to use a model with task-spe... |
For lightweight tuning, i.e., fixing the model and only tuning prompts, you can load MVP with randomly initialized prompts or with task-specific prompts. Our code also supports Prefix-tuning with BART following the original paper.
python
from transformers import MvpTokenizerFast, MvpForConditionalGeneration
tokenizer = MvpTokenizerFast.from_pretrained("RUCAIBox/mvp")
model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp")
model_with_mtl = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mtl-data-to-text")
inputs = tokenizer(
"De... |
Resources
Text classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Translation task guide
Summarization task guide |
MvpConfig
[[autodoc]] MvpConfig
MvpTokenizer
[[autodoc]] MvpTokenizer
MvpTokenizerFast
[[autodoc]] MvpTokenizerFast
MvpModel
[[autodoc]] MvpModel
- forward
MvpForConditionalGeneration
[[autodoc]] MvpForConditionalGeneration
- forward
MvpForSequenceClassification
[[autodoc]] MvpForSequenceClassification
- fo... |
from transformers import MvpForConditionalGeneration
model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp", use_prompt=True)
# the number of trainable parameters (full tuning)
sum(p.numel() for p in model.parameters() if p.requires_grad)
# 468116832
# lightweight tuning with randomly initialized prompts
model.... |
Overview of MBart
The MBart model was presented in Multilingual Denoising Pre-training for Neural Machine Translation by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan
Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
According to the abstract, MBART is a sequence-to-sequence denoising auto-encoder pret... |
Supervised training
python
from transformers import MBartForConditionalGeneration, MBartTokenizer
tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro", src_lang="en_XX", tgt_lang="ro_RO")
example_english_phrase = "UN Chief Says There Is No Military Solution in Syria"
expected_translation_romanian = "Şeful ONU declară că nu e... |
MBart and MBart-50
Overview of MBart-50
MBart-50 was introduced in the Multilingual Translation with Extensible Multilingual Pretraining and Finetuning paper by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav
Chaudhary, Jiatao Gu, Angela Fan. MBart-50 is created using the original mbart-large-cc25 checkpoint by ext... |