Like ViT, an image is split into a sequence of patches, except these image patches are smaller. This is better for dense prediction tasks like segmentation or depth estimation. The image patches are transformed into patch embeddings (see the image classification section for more details about how patch embeddings are c...
BERT is pretrained with two objectives: masked language modeling and next-sentence prediction. In masked language modeling, some percentage of the input tokens are randomly masked, and the model needs to predict these. This solves the issue of bidirectionality, where the model could cheat and see all the words and "p... |
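As a quick illustration of the masked language modeling objective, a pretrained BERT can be asked to fill in a mask (a minimal sketch; the fill-mask pipeline call and checkpoint name are our assumptions, not from the text above):
from transformers import pipeline

# Hedged example: let a pretrained BERT predict a masked token.
unmasker = pipeline("fill-mask", model="google-bert/bert-base-uncased")
print(unmasker("Paris is the [MASK] of France.")[0]["token_str"])  # likely "capital"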
A lightweight decoder takes the last feature map (1/32 scale) from the encoder and upsamples it to 1/16 scale. From here, the feature is passed into a Selective Feature Fusion (SFF) module, which selects and combines local and global features from an attention map for each feature and then upsamples it to 1/8th. This... |
Natural language processing
The Transformer was initially designed for machine translation, and since then, it has practically become the default architecture for solving all NLP tasks. Some tasks lend themselves to the Transformer's encoder structure, while others are better suited for the decoder. Still, other task... |
To use the pretrained model for text classification, add a sequence classification head on top of the base BERT model. The sequence classification head is a linear layer that accepts the final hidden states and performs a linear transformation to convert them into logits. The cross-entropy loss is calculated between ... |
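A minimal sketch of this pattern (checkpoint name, label count, and example text are illustrative assumptions):
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# BERT encoder plus a randomly initialized linear classification head.
tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-uncased", num_labels=2)

inputs = tokenizer("This movie was great!", return_tensors="pt")
outputs = model(**inputs, labels=torch.tensor([1]))  # passing labels triggers the cross-entropy loss
print(outputs.loss, outputs.logits.shape)  # scalar loss, logits of shape (1, 2)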
💡 Notice how easy it is to use BERT for different tasks once it's been pretrained. You only need to add a specific head to the pretrained model to manipulate the hidden states into your desired output!
Text generation
GPT-2 is a decoder-only model pretrained on a large amount of text. It can generate convincing (thou... |
The output from the decoder is passed to a language modeling head, which performs a linear transformation to convert the hidden states into logits. The labels are the next tokens in the sequence, which are created by shifting the logits to the right by one. The cross-entropy loss is calculated between the shifted logits a...
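Concretely, the shift-and-compare step looks roughly like this (a minimal PyTorch sketch of the idea, not the library's exact code):
import torch
import torch.nn.functional as F

batch, seq_len, vocab = 1, 6, 50257
logits = torch.randn(batch, seq_len, vocab)         # output of the language modeling head
input_ids = torch.randint(vocab, (batch, seq_len))  # the input token ids

# Position i predicts token i+1: drop the last logit and the first token.
shift_logits = logits[:, :-1, :]
shift_labels = input_ids[:, 1:]
loss = F.cross_entropy(shift_logits.reshape(-1, vocab), shift_labels.reshape(-1))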
GPT-2 uses byte pair encoding (BPE) to tokenize words and generate a token embedding. Positional encodings are added to the token embeddings to indicate the position of each token in the sequence. The input embeddings are passed through multiple decoder blocks to output some final hidden state. Within each decoder bl... |
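For instance, the BPE tokenization step can be inspected directly (a small illustration; the example string is ours):
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
print(tokenizer.tokenize("Transformers are polyglots"))
# e.g. ['Transform', 'ers', 'Ġare', 'Ġpoly', 'gl', 'ots'] -- 'Ġ' marks a leading space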
The input embeddings are passed through multiple encoder layers to output some final hidden states.
GPT-2's pretraining objective is based entirely on causal language modeling, predicting the next word in a sequence. This makes GPT-2 especially good at tasks that involve generating text.
Ready to try your hand at text generation? Check out our complete causal language modeling guide to learn how to finetune DistilGPT... |
For more information about text generation, check out the text generation strategies guide!
Summarization
Encoder-decoder models like BART and T5 are designed for the sequence-to-sequence pattern of a summarization task. We'll explain how BART works in this section, and then you can try finetuning T5 at the end. |
BART's encoder architecture is very similar to BERT and accepts a token and positional embedding of the text. BART is pretrained by corrupting the input and then reconstructing it with the decoder. Unlike other encoders with specific corruption strategies, BART can apply any type of corruption. The text infilling cor... |
Ready to try your hand at summarization? Check out our complete summarization guide to learn how to finetune T5 and use it for inference!
For more information about text generation, check out the text generation strategies guide! |
[[open-in-colab]]
Let's take a look at how 🤗 Transformers models can be benchmarked, along with best practices and already available benchmarks.
A notebook explaining in more detail how to benchmark 🤗 Transformers models can be found here.
How to benchmark 🤗 Transformers models
The classes [PyTorchBenchmark] and [TensorFlowBe... |
The encoder's output is passed to the decoder, which must predict the masked tokens and any uncorrupted tokens from the encoder's output. This gives additional context to help the decoder restore the original text. The output from the decoder is passed to a language modeling head, which performs a linear transformation...
For more information about text generation, check out the text generation strategies guide!
Translation
Translation is another example of a sequence-to-sequence task, which means you can use an encoder-decoder model like BART or T5 to do it. We'll explain how BART works in this section, and then you can try finetuning T5 at the end.
BART adapts to translation by adding a separate randomly initialized encode... |
The benchmark classes [PyTorchBenchmark] and [TensorFlowBenchmark] expect an object of type [PyTorchBenchmarkArguments] and
[TensorFlowBenchmarkArguments], respectively, for instantiation. [PyTorchBenchmarkArguments] and [TensorFlowBenchmarkArguments] are data classes and contain all relevant configurations for their c... |
Benchmarks
Hugging Face's benchmarking tools are deprecated, and it is advised to use external benchmarking libraries to measure the speed
and memory complexity of Transformer models.
from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments
args = PyTorchBenchmarkArguments(models=["google-bert/bert-base-uncased"], batch_sizes=[8], sequence_lengths=[8, 32, 128, 512])
benchmark = PyTorchBenchmark(args)
from transformers import TensorFlowBenchmark, TensorFlowBenchmarkArguments
... |
Here, three arguments are given to the benchmark argument data classes, namely models, batch_sizes, and
sequence_lengths. The argument models is required and expects a list of model identifiers from the
model hub. The list arguments batch_sizes and sequence_lengths define
the size of the input_ids on which the model i...
Here, inference is defined as a single forward pass, and training as a single forward pass and a backward pass.
results = benchmark.run()
print(results)
==================== INFERENCE - SPEED - RESULT ====================
Model Name                       Batch Size     Seq Length     Time in s
google-bert/bert-base-uncased            8              8         0.006
google-bert/bert-base-uncased            8              3...
python examples/pytorch/benchmarking/run_benchmark.py --help
An instantiated benchmark object can then simply be run by calling benchmark.run().
results = benchmark.run()
print(results)
==================== INFERENCE - SPEED - RESULT ====================
==================== INFERENCE - MEMORY - RESULT ====================
Model Name                       Batch Size     Seq Length     Memory in MB
google-bert/bert-base-uncased            8              8             1227
google-bert/bert-base-uncased            8             32             1281
google-bert/bert-base-unca...
transformers_version: 2.11.0
framework: PyTorch
use_torchscript: False
framework_version: 1.4.0
python_version: 3.6.10
system: Linux
cpu: x86_64
architecture: 64bit
date: 2020-06-29
time: 08:58:43.371351
fp16: False
use_multiprocessing: True
only_pretrain_model: False
cpu_ram_mb: 32088
use_gpu: True
num_gpus: 1
gpu: TI... |
Model Name                       Batch Size     Seq Length     Time in s
google-bert/bert-base-uncased            8              8         0.005
google-bert/bert-base-uncased            8             32         0.008
google-bert/bert-base-uncased            8            128         0.022
google-bert/bert-base-uncased ...
An instantiated benchmark object can then simply be run by calling benchmark.run().
results = benchmark.run()
print(results)
==================== INFERENCE - SPEED - RESULT ==================== |
==================== INFERENCE - MEMORY - RESULT ====================
Model Name                       Batch Size     Seq Length     Memory in MB
google-bert/bert-base-uncased            8              8             1330
google-bert/bert-base-uncased            8             32             1330
google-bert/bert-base-unca...
==================== ENVIRONMENT INFORMATION ====================
transformers_version: 2.11.0
framework: Tensorflow
use_xla: False
framework_version: 2.2.0
python_version: 3.6.10
system: Linux
cpu: x86_64
architecture: 64bit
date: 2020-06-29
time: 09:26:35.617317
fp16: False
use_multiprocessing: True
o... |
By default, the time and the required memory for inference are benchmarked. In the example output above, the first
two sections show the result corresponding to inference time and inference memory. In addition, all relevant
information about the computing environment, e.g. the GPU type, the system, the library version...
from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments, BertConfig
args = PyTorchBenchmarkArguments(
models=["bert-base", "bert-384-hid", "bert-6-lay"], batch_sizes=[8], sequence_lengths=[8, 32, 128, 512]
)
config_base = BertConfig()
config_384_hid = BertConfig(hidden_size=384)
config_6_lay = Bert... |
Model Name     Batch Size     Seq Length     Time in s
bert-base               8              8         0.006
bert-base               8             32         0.006
bert-base               8            128         0.018
bert-base               8            512         0.08...
==================== INFERENCE - MEMORY - RESULT ====================
Model Name     Batch Size     Seq Length     Memory in MB
bert-base               8              8             1277
bert-base               8             32             1281
bert-base               8            128             ...
==================== ENVIRONMENT INFORMATION ====================
transformers_version: 2.11.0
framework: PyTorch
use_torchscript: False
framework_version: 1.4.0
python_version: 3.6.10
system: Linux
cpu: x86_64
architecture: 64bit
date: 2020-06-29
time: 09:35:25.143267
fp16: False
use_multiprocessing: T... |
from transformers import TensorFlowBenchmark, TensorFlowBenchmarkArguments, BertConfig
==================== INFERENCE - MEMORY - RESULT ====================
Model Name     Batch Size     Seq Length     Memory in MB
bert-base               8              8             1330
bert-base               8             32             1330
bert-base               8            128             ...
Model Name     Batch Size     Seq Length     Time in s
bert-base               8              8         0.005
bert-base               8             32         0.008
bert-base               8            128         0.022
bert-base               8            512         0.106
b...
Again, inference time and required memory for inference are measured, but this time for customized configurations
of the BertModel class. This feature can be especially helpful when deciding which configuration the model
should be trained with.
Benchmark best practices
This section lists a couple of best practices one s... |
args = TensorFlowBenchmarkArguments(
models=["bert-base", "bert-384-hid", "bert-6-lay"], batch_sizes=[8], sequence_lengths=[8, 32, 128, 512]
)
config_base = BertConfig()
config_384_hid = BertConfig(hidden_size=384)
config_6_lay = BertConfig(num_hidden_layers=6)
benchmark = TensorFlowBenchmark(args, configs=[confi... |
Currently, only single device benchmarking is supported. When benchmarking on GPU, it is recommended that the user
specifies on which device the code should be run by setting the CUDA_VISIBLE_DEVICES environment variable in the
shell, e.g. export CUDA_VISIBLE_DEVICES=0 before running the code.
The option no_multi... |
Sharing your benchmark
Previously, all available core models (10 at the time) were benchmarked for inference time, across many different
settings: using PyTorch, with and without TorchScript, using TensorFlow, with and without XLA. All of those tests were
done across CPUs (except for TensorFlow XLA) and GPUs.
The a... |
PyTorch Benchmarking Results.
TensorFlow Benchmarking Results. |
==================== ENVIRONMENT INFORMATION ====================
transformers_version: 2.11.0
framework: Tensorflow
use_xla: False
framework_version: 2.2.0
python_version: 3.6.10
system: Linux
cpu: x86_64
architecture: 64bit
date: 2020-06-29
time: 09:38:15.487125
fp16: False
use_multiprocessing: True
o... |
Text generation strategies
Text generation is essential to many NLP tasks, such as open-ended text generation, summarization, translation, and
more. It also plays a role in a variety of mixed-modality applications that have text as an output like speech-to-text
and vision-to-text. Some of the models that can generate ... |
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("distilbert/distilgpt2")
model.generation_config
GenerationConfig {
"bos_token_id": 50256,
"eos_token_id": 50256,
} |
Printing out the model.generation_config reveals only the values that are different from the default generation
configuration, and does not list any of the default values.
The default generation configuration limits the size of the output combined with the input prompt to a maximum of 20
tokens to avoid running into ... |
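For example, the 20-token default can be raised for a single call (a small illustrative snippet; the prompt is ours):
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert/distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilbert/distilgpt2")
inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)  # override the default length limit
print(tokenizer.decode(outputs[0], skip_special_tokens=True))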
my_model.generate(**inputs, num_beams=4, do_sample=True)  # doctest: +SKIP
Even if the default decoding strategy mostly works for your task, you can still tweak a few things. Some of the
commonly adjusted parameters include: |
Save a custom decoding strategy with your model
If you would like to share your fine-tuned model with a specific generation configuration, you can:
* Create a [GenerationConfig] class instance
* Specify the decoding strategy parameters
* Save your generation configuration with [GenerationConfig.save_pretrained], making... |
from transformers import AutoModelForCausalLM, GenerationConfig
model = AutoModelForCausalLM.from_pretrained("my_account/my_model") # doctest: +SKIP
generation_config = GenerationConfig(
max_new_tokens=50, do_sample=True, top_k=50, eos_token_id=model.config.eos_token_id
)
generation_config.save_pretrained("my_ac... |
You can also store several generation configurations in a single directory, making use of the config_file_name
argument in [GenerationConfig.save_pretrained]. You can later instantiate them with [GenerationConfig.from_pretrained]. This is useful if you want to
store several generation configurations for a single model ... |
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, GenerationConfig
tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-small")
translation_generation_config = GenerationConfig(
num_beams=4,
early_stopping=True,
decoder... |
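The snippet above is cut off; saving and reloading under a custom file name might then look like this (a sketch; the directory and file name are our assumptions):
# Save this configuration under its own file name alongside the model.
translation_generation_config.save_pretrained(
    "my_account/my_model", config_file_name="translation_generation_config.json"
)
# Later, load exactly that configuration back.
generation_config = GenerationConfig.from_pretrained(
    "my_account/my_model", config_file_name="translation_generation_config.json"
)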
max_new_tokens: the maximum number of tokens to generate. In other words, the size of the output sequence, not
including the tokens in the prompt. As an alternative to using the output's length as a stopping criterion, you can choose
to stop generation whenever the full generation exceeds some amount of time. To learn...
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
tok = AutoTokenizer.from_pretrained("openai-community/gpt2")
model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
inputs = tok(["An increasing sequence: one,"], return_tensors="pt")
streamer = TextStreamer(tok)
Despite returni... |
Streaming
generate() supports streaming through its streamer input. The streamer input is compatible with any instance
from a class that has the following methods: put() and end(). Internally, put() is used to push new tokens and
end() is used to flag the end of text generation.
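As an illustration of that put()/end() contract, a custom streamer can be as simple as this sketch (PrintStreamer is our own example class, not part of the library):
from transformers import AutoModelForCausalLM, AutoTokenizer

class PrintStreamer:
    # Minimal streamer: put() receives tensors of token ids, end() flags completion.
    def __init__(self, tokenizer):
        self.tokenizer = tokenizer
    def put(self, token_ids):
        print(self.tokenizer.decode(token_ids.flatten().tolist()), end="", flush=True)
    def end(self):
        print()  # finish the line once generation ends

tok = AutoTokenizer.from_pretrained("openai-community/gpt2")
model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
inputs = tok(["An increasing sequence: one,"], return_tensors="pt")
model.generate(**inputs, streamer=PrintStreamer(tok), max_new_tokens=20)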
The API for the streamer classes is still under development and may change in the future.
In practice, you can craft your own streaming class for all sorts of purposes! We also have basic streaming classes
ready for you to use. For example, you can use the [TextStreamer] class to stream the output of generate() into
y... |
from transformers import AutoModelForCausalLM, AutoTokenizer
prompt = "I look forward to"
checkpoint = "distilbert/distilgpt2"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
inputs = tokenizer(prompt, return_tensors="pt")
model = AutoModelForCausalLM.from_pretrained(checkpoint)
outputs = model.generate(**inputs)... |
Contrastive search
The contrastive search decoding strategy was proposed in the 2022 paper A Contrastive Framework for Neural Text Generation.
It demonstrates superior results for generating non-repetitive yet coherent long outputs. To learn how contrastive search
works, check out this blog post.
The two main parameter... |
from transformers import AutoTokenizer, AutoModelForCausalLM
checkpoint = "openai-community/gpt2-large"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)
prompt = "Hugging Face Company is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.gene... |
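The call above is truncated; with the two main contrastive search parameters it would look roughly like this (the values 0.6 and 4 are illustrative):
outputs = model.generate(**inputs, penalty_alpha=0.6, top_k=4, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))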
Decoding strategies
Certain combinations of the generate() parameters, and ultimately generation_config, can be used to enable specific
decoding strategies. If you are new to this concept, we recommend reading this blog post that illustrates how common decoding strategies work.
Here, we'll show some of the parameters... |
Multinomial sampling
As opposed to greedy search that always chooses a token with the highest probability as the
next token, multinomial sampling (also called ancestral sampling) randomly selects the next token based on the probability distribution over the entire
vocabulary given by the model. Every token with a non-z... |
from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed
set_seed(0) # For reproducibility
checkpoint = "openai-community/gpt2-large"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)
prompt = "Today was an amazing day because"
inputs = token... |
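The example is truncated; the remaining steps would plausibly be as follows (do_sample=True together with num_beams=1 selects multinomial sampling; max_new_tokens is our choice):
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, do_sample=True, num_beams=1, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))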
Beam-search decoding
Unlike greedy search, beam-search decoding keeps several hypotheses at each time step and eventually chooses
the hypothesis that has the overall highest probability for the entire sequence. This has the advantage of identifying high-probability
sequences that start with lower probability initial to... |
from transformers import AutoModelForCausalLM, AutoTokenizer
prompt = "It is astonishing how one can"
checkpoint = "openai-community/gpt2-medium"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
inputs = tokenizer(prompt, return_tensors="pt")
model = AutoModelForCausalLM.from_pretrained(checkpoint)
outputs = mod... |
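Completing the truncated call, beam search is enabled by setting num_beams greater than 1 (the value 5 is illustrative):
outputs = model.generate(**inputs, num_beams=5, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))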
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, set_seed
set_seed(0) # For reproducibility
prompt = "translate English to German: The house is wonderful."
checkpoint = "google-t5/t5-small"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
inputs = tokenizer(prompt, return_tensors="pt")
model = AutoM... |
This guide illustrates the main parameters that enable various decoding strategies. More advanced parameters exist for the
[generate] method, which give you even further control over its behavior.
For the complete list of the available parameters, refer to the API documentation.
Speculative Decod... |
Beam-search multinomial sampling
As the name implies, this decoding strategy combines beam search with multinomial sampling. You need to set
num_beams greater than 1 and do_sample=True to use this decoding strategy.
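A minimal sketch of this combination, reusing the T5 translation setup shown earlier (our own completion of the pattern; parameter values are illustrative):
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, set_seed

set_seed(0)  # For reproducibility
checkpoint = "google-t5/t5-small"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
inputs = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt")
# num_beams > 1 combined with do_sample=True enables beam-search multinomial sampling.
outputs = model.generate(**inputs, num_beams=5, do_sample=True, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))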
from transformers import AutoModelForCausalLM, AutoTokenizer
prompt = "Alice and Bob"
checkpoint = "EleutherAI/pythia-1.4b-deduped"
assistant_checkpoint = "EleutherAI/pythia-160m-deduped"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
inputs = tokenizer(prompt, return_tensors="pt")
model = AutoModelForCausalLM... |
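The snippet cuts off before the generate call; assisted decoding would then proceed roughly like this (our completion; assistant_model is the argument that enables it):
model = AutoModelForCausalLM.from_pretrained(checkpoint)
assistant_model = AutoModelForCausalLM.from_pretrained(assistant_checkpoint)
# The smaller assistant model drafts tokens that the main model verifies.
outputs = model.generate(**inputs, assistant_model=assistant_model)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))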
Diverse beam search decoding
The diverse beam search decoding strategy is an extension of the beam search strategy that allows for generating a more diverse
set of beam sequences to choose from. To learn how it works, refer to Diverse Beam Search: Decoding Diverse Solutions from Neural Sequence Models.
This approach ... |
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
checkpoint = "google/pegasus-xsum"
prompt = (
"The Permaculture Design Principles are a set of universal design principles "
"that can be applied to any location, climate and culture, and they allow us to design "
"the most efficient and sus... |
When using assisted decoding with sampling methods, you can use the temperature argument to control the randomness,
just like in multinomial sampling. However, in assisted decoding, reducing the temperature may help improve the latency.
from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
set_seed(42) # For reproducibility
prompt = "Alice and Bob"
checkpoint = "EleutherAI/pythia-1.4b-deduped"
assistant_checkpoint = "EleutherAI/pythia-160m-deduped"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
inputs = tokenizer(prompt, ret... |
Glossary
This glossary defines general machine learning and 🤗 Transformers terms to help you better understand the
documentation.
A
attention mask
The attention mask is an optional argument used when batching sequences together.
This argument indicates to the model which tokens should be attended to, and which should... |
Alternatively, you can also set the prompt_lookup_num_tokens to trigger n-gram based assisted decoding, as opposed
to model based assisted decoding. You can read more about it here. |
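A brief sketch of that switch (the value 3 is arbitrary; no assistant model is needed for n-gram lookup):
# Prompt-lookup (n-gram based) assisted decoding instead of a draft model:
outputs = model.generate(**inputs, prompt_lookup_num_tokens=3)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))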
The encoded versions have different lengths:
len(encoded_sequence_a), len(encoded_sequence_b)
(8, 19)
Therefore, we can't put them together in the same tensor as-is. The first sequence needs to be padded up to the length
of the second one, or the second one needs to be truncated down to the length of the first o... |
This argument indicates to the model which tokens should be attended to, and which should not.
For example, consider these two sequences:
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-cased")
sequence_a = "This is a short sequence."
sequence_b = "This is a... |
padded_sequences = tokenizer([sequence_a, sequence_b], padding=True)
We can see that 0s have been added on the right of the first sentence to make it the same length as the second one:
padded_sequences["input_ids"]
[[101, 1188, 1110, 170, 1603, 4954, 119, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [101, 1188, 1110, ... |
This can then be converted into a tensor in PyTorch or TensorFlow. The attention mask is a binary tensor indicating the
position of the padded indices so that the model does not attend to them. For the [BertTokenizer], 1 indicates a
value that should be attended to, while 0 indicates a padded value. This attention mask... |
padded_sequences["attention_mask"]
[[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]] |
deep learning (DL)
Machine learning algorithms which use neural networks with several layers.
E
encoder models
Also known as autoencoding models, encoder models take an input (such as text or images) and transform them into a condensed numerical representation called an embedding. Oftentimes, encoder models are pretra... |
I
image patch
Vision-based Transformers models split an image into smaller patches which are linearly embedded, and then passed as a sequence to the model. You can find the patch_size - or resolution - of the model in its configuration.
inference
Inference is the process of evaluating a model on new data after traini... |
Each tokenizer works differently but the underlying mechanism remains the same. Here's an example using the BERT
tokenizer, which is a WordPiece tokenizer:
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-cased")
sequence = "A Titan RTX has 24GB of VRAM"
The... |
autoencoding models
See encoder models and masked language modeling
autoregressive models
See causal language modeling and decoder models
B
backbone
The backbone is the network (embeddings and layers) that outputs the raw hidden states or features. It is usually connected to a head which accepts the features as its i... |
F
feature extraction
The process of selecting and transforming raw data into a set of features that are more informative and useful for machine learning algorithms. Some examples of feature extraction include transforming raw text into word embeddings and extracting important features such as edges or shapes from ima... |
[GPT2ForSequenceClassification] is a sequence classification head - a linear layer - on top of the base [GPT2Model].
[ViTForImageClassification] is an image classification head - a linear layer on top of the final hidden state of the CLS token - on top of the base [ViTModel].
[Wav2Vec2ForCTC] is a language modeling hea... |
The tokenizer takes care of splitting the sequence into tokens available in the tokenizer vocabulary.
tokenized_sequence = tokenizer.tokenize(sequence)
The tokens are either words or subwords. Here for instance, "VRAM" wasn't in the model vocabulary, so it's been split
in "V", "RA" and "M". To indicate those tok... |
print(tokenized_sequence)
['A', 'Titan', 'R', '##T', '##X', 'has', '24', '##GB', 'of', 'V', '##RA', '##M']
These tokens can then be converted into IDs which are understandable by the model. This can be done by directly feeding the sentence to the tokenizer, which leverages the Rust implementation of 🤗 Tokenizers for ... |
inputs = tokenizer(sequence)
The tokenizer returns a dictionary with all the arguments necessary for its corresponding model to work properly. The
token indices are under the key input_ids:
encoded_sequence = inputs["input_ids"]
print(encoded_sequence)
[101, 138, 18696, 155, 1942, 3190, 1144, 1572, 13745, 1104, 159, 9664, 2107, 102]
Note that the tokenizer automatically adds "special tokens" (if the associated model relies on them) which are special
IDs the model sometimes uses.
If we decode the previous... |
print(decoded_sequence)
[CLS] A Titan RTX has 24GB of VRAM [SEP]
because this is the way a [BertModel] is going to expect its inputs.
L
labels
The labels are an optional argument which can be passed in order for the model to compute the loss itself. These labels
should be the expected prediction of the model: it will ... |
Each model's labels may be different, so be sure to always check the documentation of each model for more information
about their specific labels! |
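For instance, a causal language model returns its loss directly when labels are supplied (a small sketch; checkpoint and prompt are our own):
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
inputs = tokenizer("Hello world", return_tensors="pt")
# For causal LM, the input ids can double as labels; the model shifts them internally.
outputs = model(**inputs, labels=inputs["input_ids"])
print(outputs.loss)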
The base models ([BertModel]) do not accept labels, as these are the base transformer models, simply outputting
features.
large language models (LLM)
A generic term that refers to transformer language models (GPT-3, BLOOM, OPT) that were trained on a large quantity of data. These models also tend to have a large numb... |
These require two different sequences to be joined in a single "input_ids" entry, which usually is performed with the
help of special tokens, such as the classifier ([CLS]) and separator ([SEP]) tokens. For example, the BERT model
builds its two sequence input as such:
[CLS] SEQUENCE_A [SEP] SEQUENCE_B [SEP]
We ... |
which will return:
print(decoded)
[CLS] HuggingFace is based in NYC [SEP] Where is HuggingFace based? [SEP]
This is enough for some models to understand where one sequence ends and where another begins. However, other models,
such as BERT, also deploy token type IDs (also called segment IDs). They are represente... |
We can use our tokenizer to automatically generate such a sentence by passing the two sequences to tokenizer as two
arguments (and not a list, like before) like this:
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-cased")
sequence_a = "HuggingFace is based ... |
For sequence classification models ([BertForSequenceClassification]), the model expects a tensor of dimension
(batch_size) with each value of the batch corresponding to the expected label of the entire sequence.
For token classification models ([BertForTokenClassification]), the model expects a tensor of dimensio...
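To make the two label shapes concrete (batch size, sequence length, and label count here are arbitrary):
import torch

batch_size, seq_len, num_labels = 4, 16, 2
seq_labels = torch.randint(num_labels, (batch_size,))            # one label per sequence
token_labels = torch.randint(num_labels, (batch_size, seq_len))  # one label per token
print(seq_labels.shape, token_labels.shape)  # torch.Size([4]) torch.Size([4, 16])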
import tensorflow as tf
model = tf.keras.Sequential(
[tf.keras.layers.Dense(10, input_shape=(10,), activation="relu"), tf.keras.layers.Dense(5, activation="softmax")]
)
The above model accepts inputs having a dimension of (10, ). We can use the model for running a forward pass like so:
Generate random inputs for ... |
XLA Integration for TensorFlow Models
[[open-in-colab]]
Accelerated Linear Algebra, dubbed XLA, is a compiler for accelerating the runtime of TensorFlow Models. From the official documentation:
XLA (Accelerated Linear Algebra) is a domain-specific compiler for linear algebra that can accelerate TensorFlow models with ... |
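In TensorFlow, XLA compilation is typically opted into through tf.function; a minimal sketch (the toy function is ours):
import tensorflow as tf

# jit_compile=True asks TensorFlow to compile this function with XLA.
@tf.function(jit_compile=True)
def dense_relu(x, w):
    return tf.nn.relu(tf.matmul(x, w))

x = tf.random.normal((8, 10))
w = tf.random.normal((10, 5))
print(dense_relu(x, w).shape)  # (8, 5)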
encoded_dict["token_type_ids"]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1] |