from transformers.keras_callbacks import KerasMetricCallback
metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set)
Specify where to push your model and tokenizer in the [~transformers.PushToHubCallback]:
from transformers.keras_callbacks import PushToHubCallback
push_to_hub_callback = PushToHubCallback(
    output_dir="my_awesome_swag_model",
    tokenizer=tokenizer,
)
Then bundle your callbacks together:
callbacks = [metric_callback, push_to_hub_callback]
Finally, you're ready to start training your model! Call fit with your training and validation datasets, the number of epochs, and your callbacks to finetune the model:
model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=2, callbacks=callbacks)
Inference
Great, now that you've finetuned a model, you can use it for inference!
Come up with some text and two candidate answers:
prompt = "France has a bread law, Le Décret Pain, with strict rules on what is allowed in a traditional baguette."
candidate1 = "The law does not apply to croissants and brioche."
candidate2 = "The law applies to baguettes."
Convert your datasets to the tf.data.Dataset format with [~transformers.TFPreTrainedModel.prepare_tf_dataset]:
data_collator = DataCollatorForMultipleChoice(tokenizer=tokenizer)
tf_train_set = model.prepare_tf_dataset(
tokenized_swag["train"],
shuffle=True,
batch_size=batch_size,
    collate_fn=data_collator,
)
tf_validation_set = model.prepare_tf_dataset(
    tokenized_swag["validation"],
    shuffle=False,
    batch_size=batch_size,
    collate_fn=data_collator,
)
Tokenize each prompt and candidate answer pair and return PyTorch tensors. You should also create some labels:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("my_awesome_swag_model")
inputs = tokenizer([[prompt, candidate1], [prompt, candidate2]], return_tensors="pt", padding=True)
labels = torch.tensor(0).unsqueeze(0)
Get the class with the highest probability:
predicted_class = logits.argmax().item()
predicted_class
0
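The argmax step simply picks the candidate with the largest logit. As a toy illustration of that selection in plain Python (the logit values here are made up; no model is involved):

```python
# Toy logits for one example with two candidate answers;
# the prediction is the index of the largest logit.
logits = [[1.2, -0.3]]
predicted_class = max(range(len(logits[0])), key=lambda i: logits[0][i])
print(predicted_class)  # 0
```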
Tokenize each prompt and candidate answer pair and return TensorFlow tensors:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("my_awesome_swag_model")
inputs = tokenizer([[prompt, candidate1], [prompt, candidate2]], return_tensors="tf", padding=True)
0.022... | model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=2, callbacks=callbacks)
Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!
For a more in-depth example of how to finetune a model for multiple choice, take a look at the corresponding
PyTorch notebook
or TensorFlow notebook.
Pass your inputs to the model and return the logits:
from transformers import TFAutoModelForMultipleChoice
model = TFAutoModelForMultipleChoice.from_pretrained("my_awesome_swag_model")
inputs = {k: tf.expand_dims(v, 0) for k, v in inputs.items()}
outputs = model(inputs)
logits = outputs.logits
Get the class with the highest probability:
predicted_class = int(tf.math.argmax(logits, axis=-1)[0])
Before you begin, make sure you have all the necessary libraries installed:
pip install transformers datasets evaluate
We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:
from huggingface_hub import notebook_login
notebook_login()
from huggingface_hub import notebook_login
notebook_login()
Load SQuAD dataset
Start by loading a smaller subset of the SQuAD dataset from the 🤗 Datasets library. This'll give you a chance to experiment and make sure everything works before spending more time training on the full dataset.
from datasets import load_dataset
squad = load_dataset("squad", split="train[:5000]")
Pass your inputs and labels to the model and return the logits:
from transformers import AutoModelForMultipleChoice
model = AutoModelForMultipleChoice.from_pretrained("my_awesome_swag_model")
outputs = model(**{k: v.unsqueeze(0) for k, v in inputs.items()}, labels=labels)
logits = outputs.logits
Get the class with the highest probability:
predicted_class = logits.argmax().item()
from datasets import load_dataset
squad = load_dataset("squad", split="train[:5000]")
Split the dataset's train split into a train and test set with the [~datasets.Dataset.train_test_split] method:
squad = squad.train_test_split(test_size=0.2)
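Under the hood this is just a shuffled 80/20 split. A minimal plain-Python sketch of the idea (illustrative only, not the 🤗 Datasets implementation; names here are hypothetical):

```python
import random

def toy_train_test_split(items, test_size=0.2, seed=42):
    # Shuffle a copy, then carve off the first test_size fraction as the test set.
    items = items[:]
    random.Random(seed).shuffle(items)
    n_test = int(len(items) * test_size)
    return {"train": items[n_test:], "test": items[:n_test]}

split = toy_train_test_split(list(range(5000)))
print(len(split["train"]), len(split["test"]))  # 4000 1000
```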
Then take a look at an example:
squad["train"][0]
{'answers': {'answer_start': [515], 'text': ['Saint Bernadette Soubirous']},
'context': 'Architecturally, the school has a Catholic character. Atop the Main Building\'s gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Ch...
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased")
There are a few preprocessing steps particular to question answering tasks you should be aware of:
Some examples in a dataset may have a very long context that exceeds the maximum input length of the model. To deal with longer sequences, truncate only the context by setting truncation="only_second".
Next, map the start and end positions of the answer to the original context by setting
return_offset_mapping=True.
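To see what the offset mapping buys you, here is a hypothetical plain-Python sketch of mapping an answer's character span onto token indices. The `offsets` values are made up; a real tokenizer returns one (char_start, char_end) pair per token:

```python
def char_span_to_token_span(offsets, start_char, end_char):
    # offsets: one (char_start, char_end) pair per token.
    token_start = token_end = None
    for i, (s, e) in enumerate(offsets):
        if token_start is None and s <= start_char < e:
            token_start = i
        if s < end_char <= e:
            token_end = i
    return token_start, token_end

offsets = [(0, 4), (5, 7), (8, 13), (14, 22)]  # pretend 4 tokens
print(char_span_to_token_span(offsets, 8, 13))  # (2, 2)
```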
Here is how you can create a function to truncate and map the start and end tokens of the answer to the context:
def preprocess_function(examples):
questions = [q.strip() for q in examples["question"]]
inputs = tokenizer(
questions,
examples["context"],
max_length=384,
        truncation="only_second",
        return_offsets_mapping=True,
        padding="max_length",
    )
There are several important fields here:
answers: the starting location of the answer token and the answer text.
context: background information from which the model needs to extract the answer.
question: the question a model should answer.
Preprocess
The next step is to load a DistilBERT tokenizer to process the question and context fields:
offset_mapping = inputs.pop("offset_mapping")
answers = examples["answers"]
start_positions = []
end_positions = []
for i, offset in enumerate(offset_mapping):
answer = answers[i]
start_char = answer["answer_start"][0]
        end_char = answer["answer_start"][0] + len(answer["text"][0])
To apply the preprocessing function over the entire dataset, use the 🤗 Datasets [~datasets.Dataset.map] function. You can speed up the map function by setting batched=True to process multiple elements of the dataset at once. Remove any columns you don't need:
tokenized_squad = squad.map(preprocess_function, batched=True, remove_columns=squad["train"].column_names)
Train
If you aren't familiar with finetuning a model with the [Trainer], take a look at the basic tutorial here!
You're ready to start training your model now! Load DistilBERT with [AutoModelForQuestionAnswering]:
from transformers import AutoModelForQuestionAnswering, TrainingArguments, Trainer
model = AutoModelForQuestionAnswering.from_pretrained("distilbert/distilbert-base-uncased")
training_args = TrainingArguments(
output_dir="my_awesome_qa_model",
evaluation_strategy="epoch",
learning_rate=2e-5,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
num_train_epochs=3,
weight_decay=0.01,
push_to_hub=True,
)
trainer = Trainer(
model=model,
    args=training_args,
    train_dataset=tokenized_squad["train"],
    eval_dataset=tokenized_squad["test"],
    tokenizer=tokenizer,
    data_collator=data_collator,
)
trainer.train()
tokenized_squad = squad.map(preprocess_function, batched=True, remove_columns=squad["train"].column_names)
Now create a batch of examples using [DefaultDataCollator]. Unlike other data collators in 🤗 Transformers, the [DefaultDataCollator] does not apply any additional preprocessing such as padding.
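Conceptually, a "default" collator only gathers a list of per-example feature dicts into one batch dict. A rough plain-Python sketch of that behavior (illustrative only, not the actual [DefaultDataCollator] code, which also converts to tensors):

```python
def default_collate(features):
    # Turn a list of feature dicts into a dict of lists, nothing else:
    # no padding, no truncation, no extra preprocessing.
    return {key: [example[key] for example in features] for key in features[0]}

batch = default_collate([{"input_ids": [1, 2]}, {"input_ids": [3, 4]}])
print(batch)  # {'input_ids': [[1, 2], [3, 4]]}
```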
from transformers import DefaultDataCollator
data_collator = DefaultDataCollator()
At this point, only three steps remain:
Define your training hyperparameters in [TrainingArguments]. The only required parameter is output_dir which specifies where to save your model. You'll push this model to the Hub by setting push_to_hub=True (you need to be signed in to Hugging Face to upload your model).
Pass the training arguments to [Trainer] along with the model, dataset, tokenizer, and data collator.
Call [~Trainer.train] to finetune your model.
Configure the model for training with compile:
import tensorflow as tf
model.compile(optimizer=optimizer)
The last thing to set up before you start training is to provide a way to push your model to the Hub. This can be done by specifying where to push your model and tokenizer in the [~transformers.PushToHubCallback]:
Once training is completed, share your model to the Hub with the [~transformers.Trainer.push_to_hub] method so everyone can use your model:
trainer.push_to_hub()
If you aren't familiar with finetuning a model with Keras, take a look at the basic tutorial here!
To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:
from transformers import create_optimizer
batch_size = 16
num_epochs = 2
total_train_steps = (len(tokenized_squad["train"]) // batch_size) * num_epochs
optimizer, schedule = create_optimizer(
    init_lr=2e-5,
    num_warmup_steps=0,
    num_train_steps=total_train_steps,
)
Then you can load DistilBERT with [TFAutoModelForQuestionAnswering]:
from transformers import TFAutoModelForQuestionAnswering
model = TFAutoModelForQuestionAnswering.from_pretrained("distilbert/distilbert-base-uncased")
Convert your datasets to the tf.data.Dataset format with [~transformers.TFPreTrainedModel.prepare_tf_dataset]:
from transformers.keras_callbacks import PushToHubCallback
callback = PushToHubCallback(
output_dir="my_awesome_qa_model",
tokenizer=tokenizer,
)
Finally, you're ready to start training your model! Call fit with your training and validation datasets, the number of epochs, and your callback to finetune the model:
Convert your datasets to the tf.data.Dataset format with [~transformers.TFPreTrainedModel.prepare_tf_dataset]:
tf_train_set = model.prepare_tf_dataset(
tokenized_squad["train"],
shuffle=True,
batch_size=16,
collate_fn=data_collator,
)
tf_validation_set = model.prepare_tf_dataset(
    tokenized_squad["test"],
    shuffle=False,
    batch_size=16,
    collate_fn=data_collator,
)
Evaluate
Evaluation for question answering requires a significant amount of postprocessing. To avoid taking up too much of your time, this guide skips the evaluation step. The [Trainer] still calculates the evaluation loss during training so you're not completely in the dark about your model's performance.
If you have more time and you're interested in how to evaluate your model for question answering, take a look at the Question answering chapter from the 🤗 Hugging Face Course!
The simplest way to try out your finetuned model for inference is to use it in a [pipeline]. Instantiate a pipeline for question answering with your model, and pass your text to it:
from transformers import pipeline
question_answerer = pipeline("question-answering", model="my_awesome_qa_model")
question_answerer(question=question, context=context)
model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=3, callbacks=[callback])
Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!
For a more in-depth example of how to finetune a model for question answering, take a look at the corresponding
PyTorch notebook or TensorFlow notebook.
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
Decode the predicted tokens to get the answer:
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
tokenizer.decode(predict_answer_tokens)
'176 billion parameters and can generate text in 46 natural languages and 13 programming languages'
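The two argmax calls above select a start token and an end token, and the answer is the token slice between them. A toy plain-Python version of the same span-picking logic (the logits and tokens below are made up for illustration):

```python
start_logits = [0.1, 0.2, 3.0, 0.1, 0.0, 0.1]
end_logits = [0.0, 0.1, 0.2, 2.5, 0.3, 0.1]
tokens = ["BLOOM", "has", "176", "billion", "parameters", "."]

# argmax over each logit vector gives the span boundaries.
start = max(range(len(start_logits)), key=start_logits.__getitem__)
end = max(range(len(end_logits)), key=end_logits.__getitem__)
print(" ".join(tokens[start : end + 1]))  # 176 billion
```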
question = "How many programming languages does BLOOM support?"
context = "BLOOM has 176 billion parameters and can generate text in 46 natural languages and 13 programming languages."
The simplest way to try out your finetuned model for inference is to use it in a [pipeline]. Instantiate a pipeline for question answering with your model, and pass your text to it:
You can also manually replicate the results of the pipeline if you'd like:
Tokenize the text and return PyTorch tensors:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("my_awesome_qa_model")
inputs = tokenizer(question, context, return_tensors="pt")
Pass your inputs to the model and return the logits:
import torch
from transformers import AutoModelForQuestionAnswering
model = AutoModelForQuestionAnswering.from_pretrained("my_awesome_qa_model")
with torch.no_grad():
outputs = model(**inputs)
Get the highest probability from the model output for the start and end positions:
Before you begin, make sure you have all the necessary libraries installed:
pip install transformers datasets evaluate
We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:
from huggingface_hub import notebook_login
notebook_login()
Tokenize the text and return TensorFlow tensors:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("my_awesome_qa_model")
inputs = tokenizer(question, context, return_tensors="tf")
Pass your inputs to the model and return the logits:
from transformers import TFAutoModelForQuestionAnswering
model = TFAutoModelForQuestionAnswering.from_pretrained("my_awesome_qa_model")
outputs = model(**inputs)
from huggingface_hub import notebook_login
notebook_login()
Load ELI5 dataset
Start by loading the first 5000 examples from the ELI5-Category dataset with the 🤗 Datasets library. This'll give you a chance to experiment and make sure everything works before spending more time training on the full dataset.
from datasets import load_dataset
eli5 = load_dataset("eli5_category", split="train[:5000]")
Split the dataset's train split into a train and test set with the [~datasets.Dataset.train_test_split] method:
eli5 = eli5.train_test_split(test_size=0.2)
Then take a look at an example:
While this may look like a lot, you're only really interested in the text field. What's cool about language modeling tasks is you don't need labels (also known as an unsupervised task) because the next word is the label.
Preprocess
For masked language modeling, the next step is to load a DistilRoBERTa tokenizer to process the text subfield:
Get the highest probability from the model output for the start and end positions:
answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
Decode the predicted tokens to get the answer:
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
tokenizer.decode(predict_answer_tokens)
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("distilbert/distilroberta-base")
You'll notice from the example above that the text field is actually nested inside answers. This means you'll need to extract the text subfield from its nested structure with the flatten method:
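What flatten does, conceptually: every nested field becomes a top-level column with a dotted prefix. A hypothetical plain-Python sketch of that behavior (illustrative only, not the 🤗 Datasets implementation):

```python
def toy_flatten(example):
    # Promote one level of nesting to "parent.child" keys.
    flat = {}
    for key, value in example.items():
        if isinstance(value, dict):
            for subkey, subvalue in value.items():
                flat[f"{key}.{subkey}"] = subvalue
        else:
            flat[key] = value
    return flat

example = {"q_id": "7h191n", "answers": {"a_id": ["dqnds8l"], "text": ["The tax bill..."]}}
print(toy_flatten(example))
# {'q_id': '7h191n', 'answers.a_id': ['dqnds8l'], 'answers.text': ['The tax bill...']}
```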
def preprocess_function(examples):
return tokenizer([" ".join(x) for x in examples["answers.text"]])
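The join matters because each example's answers.text is a list of strings. A quick standalone check of what the list comprehension produces (toy data, no tokenizer):

```python
# Each example holds a list of answer strings; join them into one string each.
answers_text = [["first answer", "second answer"], ["only answer"]]
joined = [" ".join(x) for x in answers_text]
print(joined)  # ['first answer second answer', 'only answer']
```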
To apply this preprocessing function over the entire dataset, use the 🤗 Datasets [~datasets.Dataset.map] method. You can speed up the map function by setting batched=True to process multiple elements of the dataset at once, and increase the number of processes with num_proc. Remove any columns you don't need:
eli5["train"][0]
{'q_id': '7h191n',
'title': 'What does the tax bill that was passed today mean? How will it affect Americans in each tax bracket?',
'selftext': '',
'category': 'Economics',
'subreddit': 'explainlikeimfive',
'answers': {'a_id': ['dqnds8l', 'dqnd1jl', 'dqng3i1', 'dqnku5x'],
'text': ["The tax bil...
tokenized_eli5 = eli5.map(
preprocess_function,
batched=True,
num_proc=4,
remove_columns=eli5["train"].column_names,
)
eli5 = eli5.flatten()
eli5["train"][0]
{'q_id': '7h191n',
'title': 'What does the tax bill that was passed today mean? How will it affect Americans in each tax bracket?',
'selftext': '',
'category': 'Economics',
'subreddit': 'explainlikeimfive',
'answers.a_id': ['dqnds8l', 'dqnd1jl', 'dqng3i1', 'dqnku5x'],
'ans...
Each subfield is now a separate column as indicated by the answers prefix, and the text field is a list now. Instead
of tokenizing each sentence separately, convert the list to a string so you can jointly tokenize them.
Here is a first preprocessing function to join the list of strings for each example and tokenize the result:
block_size = 128
def group_texts(examples):
# Concatenate all texts.
concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
total_length = len(concatenated_examples[list(examples.keys())[0]])
    # We drop the small remainder; you could add padding instead if the model supported it.
    total_length = (total_length // block_size) * block_size
    # Split into chunks of block_size.
    result = {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated_examples.items()
    }
    return result
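As a sanity check of the concatenate-and-chunk logic, here is the same idea applied to a toy batch with a small block size (plain Python, no tokenizer; the ids and the block_size of 4 are made up for illustration):

```python
block_size = 4

def toy_group_texts(examples):
    # Concatenate all lists, drop the remainder, split into block_size chunks.
    concatenated = {k: sum(examples[k], []) for k in examples}
    total_length = (len(concatenated[next(iter(examples))]) // block_size) * block_size
    return {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated.items()
    }

batch = {"input_ids": [[1, 2, 3], [4, 5, 6, 7, 8, 9]]}
print(toy_group_texts(batch))  # {'input_ids': [[1, 2, 3, 4], [5, 6, 7, 8]]}
```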
Apply the group_texts function over the entire dataset:
lm_dataset = tokenized_eli5.map(group_texts, batched=True, num_proc=4)
Now create a batch of examples using [DataCollatorForLanguageModeling]. It's more efficient to dynamically pad the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length.
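Dynamic padding in a nutshell: each batch is padded only to its own longest sequence. A minimal plain-Python sketch of the idea (illustrative only, not the actual collator code):

```python
def pad_to_longest(batch, pad_token_id=0):
    # Pad every sequence to the longest length in *this* batch only.
    max_len = max(len(seq) for seq in batch)
    return [seq + [pad_token_id] * (max_len - len(seq)) for seq in batch]

print(pad_to_longest([[1, 2], [3, 4, 5]]))  # [[1, 2, 0], [3, 4, 5]]
```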
This dataset contains the token sequences, but some of these are longer than the maximum input length for the model.
You can now use a second preprocessing function to
- concatenate all the sequences
- split the concatenated sequences into shorter chunks defined by block_size, which should be both shorter than the maximum input length and short enough for your GPU RAM
Use the end-of-sequence token as the padding token and specify mlm_probability to randomly mask tokens each time you iterate over the data:
from transformers import DataCollatorForLanguageModeling
tokenizer.pad_token = tokenizer.eos_token
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
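The mlm_probability knob controls how many tokens get masked on each pass over the data. A hypothetical plain-Python sketch of the masking idea (the real collator also sometimes substitutes random tokens or keeps the original; this only shows the probability part):

```python
import random

def toy_mask_tokens(token_ids, mask_token_id, mlm_probability=0.15, seed=0):
    # Mask each token with probability mlm_probability; unmasked positions
    # get label -100 so the loss ignores them.
    rng = random.Random(seed)
    masked, labels = [], []
    for tok in token_ids:
        if rng.random() < mlm_probability:
            masked.append(mask_token_id)
            labels.append(tok)
        else:
            masked.append(tok)
            labels.append(-100)
    return masked, labels

masked, labels = toy_mask_tokens([10, 11, 12, 13], mask_token_id=999, mlm_probability=1.0)
print(masked, labels)  # [999, 999, 999, 999] [10, 11, 12, 13]
```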
At this point, only three steps remain:
Define your training hyperparameters in [TrainingArguments]. The only required parameter is output_dir which specifies where to save your model. You'll push this model to the Hub by setting push_to_hub=True (you need to be signed in to Hugging Face to upload your model).
Pass the training arguments to [Trainer] along with the model, datasets, and data collator.
Call [~Trainer.train] to finetune your model.
training_args = TrainingArguments(
    output_dir="my_awesome_eli5_mlm_model",
    evaluation_strategy="epoch",
    learning_rate=2e-5,
    num_train_epochs=3,
    weight_decay=0.01,
    push_to_hub=True,
)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=lm_dataset["train"],
... |
Use the end-of-sequence token as the padding token and specify mlm_probability to randomly mask tokens each time you iterate over the data:
from transformers import DataCollatorForLanguageModeling
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15, return_tensors="tf")
Train
If... |
trainer.push_to_hub()
If you aren't familiar with finetuning a model with Keras, take a look at the basic tutorial here!
To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:
from transformers import create_optimizer, AdamWeightDecay
... |
from transformers import create_optimizer, AdamWeightDecay
optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01)
Then you can load DistilRoBERTa with [TFAutoModelForMaskedLM]:
from transformers import TFAutoModelForMaskedLM
model = TFAutoModelForMaskedLM.from_pretrained("distilbert/distilroberta-base")
Train
If you aren't familiar with finetuning a model with the [Trainer], take a look at the basic tutorial here!
You're ready to start training your model now! Load DistilRoBERTa with [AutoModelForMaskedLM]:
from transformers import AutoModelForMaskedLM
model = AutoModelForMaskedLM.from_pretrained("distilbert/distilroberta-base")
Once training is completed, use the [~transformers.Trainer.evaluate] method to evaluate your model and get its perplexity:
import math
eval_results = trainer.evaluate()
print(f"Perplexity: {math.exp(eval_results['eval_loss']):.2f}")
Perplexity: 8.76
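Perplexity is simply the exponential of the evaluation cross-entropy loss, so you can recover it from any eval_loss value; the 2.17 below is an illustrative loss chosen to match the figure above:

```python
import math

def perplexity(eval_loss):
    # Perplexity is exp(mean cross-entropy loss).
    return math.exp(eval_loss)

# An eval_loss of about 2.17 corresponds to the perplexity shown above.
print(f"Perplexity: {perplexity(2.17):.2f}")
```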
Then share your model to the Hub with the [~transformers.Trainer.push_to_hub] method so everyone can use your model:
import tensorflow as tf
model.compile(optimizer=optimizer) # No loss argument!
This can be done by specifying where to push your model and tokenizer in the [~transformers.PushToHubCallback]:
from transformers.keras_callbacks import PushToHubCallback
callback = PushToHubCallback(
output_dir="my_awesome_eli5_mlm_... |
Convert your datasets to the tf.data.Dataset format with [~transformers.TFPreTrainedModel.prepare_tf_dataset]:
tf_train_set = model.prepare_tf_dataset(
    lm_dataset["train"],
    shuffle=True,
    batch_size=16,
    collate_fn=data_collator,
)
tf_test_set = model.prepare_tf_dataset(
    lm_dataset["test"],
... |
Configure the model for training with compile. Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to:
import tensorflow as tf
model.compile(optimizer=optimizer) # No loss argument!
This can be done by specifying where to push your model and ... |
from transformers.keras_callbacks import PushToHubCallback
callback = PushToHubCallback(
    output_dir="my_awesome_eli5_mlm_model",
    tokenizer=tokenizer,
)
Finally, you're ready to start training your model! Call fit with your training and validation datasets, the number of epochs, and your callback to finetune... |
from transformers import pipeline
mask_filler = pipeline("fill-mask", "username/my_awesome_eli5_mlm_model")
mask_filler(text, top_k=3)
[{'score': 0.5150994658470154,
'token': 21300,
'token_str': ' spiral',
'sequence': 'The Milky Way is a spiral galaxy.'},
{'score': 0.07087188959121704,
'token': 2232,
'toke... |
model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=[callback])
Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!
For a more in-depth example of how to finetune a model for masked language modeling, take a look at the corresponding
PyTorch notebook
or TensorFlow notebook.
Inference
Great, now that you've finetuned a model, you can use it for inference!
Come up with some text you'd like the model to fill in the blank with, and use... |
Tokenize the text and return the input_ids as PyTorch tensors. You'll also need to specify the position of the <mask> token:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("username/my_awesome_eli5_mlm_model")
inputs = tokenizer(text, return_tensors="pt")
mask_token_index = torch.where(inputs["input_ids"] == tokenizer.mask_token_id)[1]
Pass your inputs to the model and return the logits of the masked token:
from transformers import AutoModelForMaskedLM
model = AutoModelForMaskedLM.from_pretrained("username/my_awesome_eli5_mlm_model")
logits = model(**inputs).logits
mask_token_logits = logits[0, mask_token_index, :]
Then return the three masked toke... |
Then return the three masked tokens with the highest probability and print them out:
top_3_tokens = torch.topk(mask_token_logits, 3, dim=1).indices[0].tolist()
for token in top_3_tokens:
    print(text.replace(tokenizer.mask_token, tokenizer.decode([token])))
The Milky Way is a spiral galaxy.
The Milky Way is a massi... |
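The decode-and-replace step above boils down to one string substitution per candidate token. A toy sketch with a made-up vocabulary — only the 21300 → 'spiral' pair comes from the pipeline output earlier; the other two id-to-string pairs are hypothetical stand-ins:

```python
# Hypothetical stand-in for tokenizer.decode plus text.replace;
# 2232 -> "massive" and 3 -> "small" are invented for illustration.
vocab = {21300: "spiral", 2232: "massive", 3: "small"}
mask_token = "<mask>"
text = f"The Milky Way is a {mask_token} galaxy."

top_3_tokens = [21300, 2232, 3]
completions = [text.replace(mask_token, vocab[t]) for t in top_3_tokens]
for line in completions:
    print(line)
```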
text = "The Milky Way is a <mask> galaxy."
The simplest way to try out your finetuned model for inference is to use it in a [pipeline]. Instantiate a pipeline for fill-mask with your model, and pass your text to it. If you like, you can use the top_k parameter to specify how many predictions to return: |
Then return the three masked tokens with the highest probability and print them out:
top_3_tokens = tf.math.top_k(mask_token_logits, 3).indices.numpy()
for token in top_3_tokens:
    print(text.replace(tokenizer.mask_token, tokenizer.decode([token])))
The Milky Way is a spiral galaxy.
The Milky Way is a massive galax... |
Now, let's load an image.
from PIL import Image
import requests
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/cat.jpg"
image = Image.open(requests.get(url, stream=True).raw)
print(image.size)
(532, 432)
We can now do inference with the pipeline. We wi... |
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("username/my_awesome_eli5_mlm_model")
inputs = tokenizer(text, return_tensors="tf")
mask_token_index = tf.where(inputs["input_ids"] == tokenizer.mask_token_id)[0, 1]
Pass your inputs to the model and return the logits of the masked token: |
Pass your inputs to the model and return the logits of the masked token:
from transformers import TFAutoModelForMaskedLM
model = TFAutoModelForMaskedLM.from_pretrained("username/my_awesome_eli5_mlm_model")
logits = model(**inputs).logits
mask_token_logits = logits[0, mask_token_index, :]
Then return the three masked ... |
If you wish to do inference yourself with no pipeline, you can use the Swin2SRForImageSuperResolution and Swin2SRImageProcessor classes of transformers. We will use the same model checkpoint for this. Let's initialize the model and the processor.
from transformers import Swin2SRForImageSuperResolution, Swin2SRImageProcessor
Image-to-Image Task Guide
[[open-in-colab]]
Image-to-image is the task where an application receives an image and outputs another image. It has various subtasks, including image enhancement (super-resolution, low-light enhancement, deraining, and so on), image inpainting, and more.
This guide will show you how ... |
pip install transformers
We can now initialize the pipeline with a Swin2SR model. We can then infer with the pipeline by calling it with an image. As of now, only Swin2SR models are supported in this pipeline.
from transformers import pipeline
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
... |
We can now run inference by passing the pixel values to the model.

import torch
with torch.no_grad():
    outputs = model(pixel_values)
Output is an object of type `ImageSuperResolutionOutput` that looks like below 👇
(loss=None, reconstruction=tensor([[[[0.8270, 0.8269, 0.8275, ..., 0.7463, 0.7446, 0.7453],
... |
We need to squeeze the output to get rid of axis 0, clip the values, and convert the tensor to a NumPy float array. Then we will arrange the axes to have the shape [1072, 880], and finally bring the output back to the range [0, 255].
import numpy as np

# squeeze, take to CPU and clip the values
output = outputs.reconstruction.data.... |
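Assuming `outputs.reconstruction` is a float tensor of shape (1, 3, height, width) with values roughly in [0, 1], the whole postprocessing chain can be sketched on a dummy NumPy array:

```python
import numpy as np

# Dummy stand-in for outputs.reconstruction after moving it to the CPU.
output = np.random.rand(1, 3, 4, 5).astype(np.float32)

img = np.squeeze(output, axis=0)              # drop the batch axis -> (3, H, W)
img = np.clip(img, 0.0, 1.0)                  # clip values into [0, 1]
img = np.moveaxis(img, 0, -1)                 # channels last -> (H, W, 3)
img = (img * 255.0).round().astype(np.uint8)  # rescale to [0, 255]
```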
Before you begin, make sure you have all the necessary libraries installed:
pip install -q pytorchvideo transformers evaluate
You will use PyTorchVideo (dubbed pytorchvideo) to process and prepare the videos.
We encourage you to log in to your Hugging Face account so you can upload and share your model with the commun... |
from huggingface_hub import notebook_login
notebook_login()
Load UCF101 dataset
Start by loading a subset of the UCF-101 dataset. This will give you a chance to experiment and make sure everything works before spending more time training on the full dataset.
from huggingface_hub import hf_hub_download
hf_dataset_iden... |
pipeline abstracts away the preprocessing and postprocessing steps that we have to do ourselves, so let's preprocess the image. We will pass the image to the processor and then move the pixel values to GPU.
pixel_values = processor(image, return_tensors="pt").pixel_values
print(pixel_values.shape)
pixel_values = ... |
After the subset has been downloaded, you need to extract the compressed archive:
import tarfile
with tarfile.open(file_path) as t:
    t.extractall(".")
At a high level, the dataset is organized like so: |
UCF101_subset/
train/
BandMarching/
video_1.mp4
video_2.mp4
Archery
video_1.mp4
video_2.mp4
val/
BandMarching/
video_1.mp4
video_2.mp4
Archery
vid... |
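A layout like this can be globbed with pathlib to collect every video path. The sketch below rebuilds a miniature copy of the tree in a temporary directory first, since the real archive isn't available here:

```python
import pathlib
import tempfile

# Miniature copy of the UCF101_subset layout shown above.
root = pathlib.Path(tempfile.mkdtemp()) / "UCF101_subset"
for split in ("train", "val"):
    for cls in ("BandMarching", "Archery"):
        d = root / split / cls
        d.mkdir(parents=True)
        (d / "video_1.mp4").touch()

# split/class/file -> three path components under the root
all_video_file_paths = sorted(root.glob("*/*/*.mp4"))
```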
'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g07_c04.avi',
'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g07_c06.avi',
'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g08_c01.avi',
'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g09_c02.avi',
'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup... |
You will notice that there are video clips belonging to the same group/scene, where the group is denoted by g in the video file paths: v_ApplyEyeMakeup_g07_c04.avi and v_ApplyEyeMakeup_g07_c06.avi, for example.
For the validation and evaluation splits, you wouldn't want to have video clips from the same group / scene to... |
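One way to honor that constraint is to key each clip by its group id before assigning splits. A sketch, assuming the UCF-101 naming scheme v_<Class>_g<group>_c<clip>.avi:

```python
import re
from collections import defaultdict

def group_id(filename):
    # Pull the gXX scene/group marker out of a UCF-101 file name.
    m = re.search(r"_g(\d+)_", filename)
    return m.group(1) if m else None

paths = [
    "v_ApplyEyeMakeup_g07_c04.avi",
    "v_ApplyEyeMakeup_g07_c06.avi",
    "v_ApplyEyeMakeup_g08_c01.avi",
]
clips_by_group = defaultdict(list)
for p in paths:
    clips_by_group[group_id(p)].append(p)
```

All clips in one group can then be sent to the same split, keeping near-duplicate scenes out of the evaluation data.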
Unique classes: ['ApplyEyeMakeup', 'ApplyLipstick', 'Archery', 'BabyCrawling', 'BalanceBeam', 'BandMarching', 'BaseballPitch', 'Basketball', 'BasketballDunk', 'BenchPress'].
There are 10 unique classes. For each class, there are 30 videos in the training set.
Load a model to fine-tune
Instantiate a video classification model from a pretrained checkpoint and its associated image processor. The model's encoder comes with pre-trained parameters, and the classification head is randomly initial... |
import pytorchvideo.data
from pytorchvideo.transforms import (
    ApplyTransformToKey,
    Normalize,
    RandomShortSideScale,
    RemoveKey,
    ShortSideScale,
    UniformTemporalSubsample,
)
from torchvision.transforms import (
    Compose,
    Lambda,
    RandomCrop,
    RandomHorizontalFlip,
Resi... |
label2id: maps the class names to integers.
id2label: maps the integers to class names.
class_labels = sorted({str(path).split("/")[2] for path in all_video_file_paths})
label2id = {label: i for i, label in enumerate(class_labels)}
id2label = {i: label for label, i in label2id.items()}
print(f"Unique classes: {lis... |
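On a toy class list, the two mappings round-trip like this:

```python
# Toy illustration of the label2id / id2label round trip.
class_labels = sorted({"BandMarching", "Archery", "ApplyEyeMakeup"})
label2id = {label: i for i, label in enumerate(class_labels)}
id2label = {i: label for label, i in label2id.items()}
```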
For the training dataset transformations, use a combination of uniform temporal subsampling, pixel normalization, random cropping, and random horizontal flipping. For the validation and evaluation dataset transformations, keep the same transformation chain except for random cropping and horizontal flipping. To learn ... |
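Uniform temporal subsampling picks a fixed number of frame indices evenly spaced across a clip; roughly, it behaves like this sketch (the real transform operates on tensors):

```python
def uniform_temporal_indices(num_frames, num_samples):
    # Evenly spaced indices from frame 0 to frame num_frames - 1, inclusive.
    if num_samples == 1:
        return [0]
    step = (num_frames - 1) / (num_samples - 1)
    return [round(step * i) for i in range(num_samples)]
```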
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification
model_ckpt = "MCG-NJU/videomae-base"
image_processor = VideoMAEImageProcessor.from_pretrained(model_ckpt)
model = VideoMAEForVideoClassification.from_pretrained(
    model_ckpt,
    label2id=label2id,
    id2label=id2label,
    ignore_mismatched_sizes=True,
Start by defining some constants.
mean = image_processor.image_mean
std = image_processor.image_std
if "shortest_edge" in image_processor.size:
    height = width = image_processor.size["shortest_edge"]
else:
    height = image_processor.size["height"]
    width = image_processor.size["width"]
resize_to = (height, ... |
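The size-selection branch above can be exercised with plain dicts standing in for image_processor.size:

```python
def resolve_resize_to(size):
    # Mirrors the guide's branch: square resize for "shortest_edge"
    # processors, explicit height/width otherwise.
    if "shortest_edge" in size:
        height = width = size["shortest_edge"]
    else:
        height = size["height"]
        width = size["width"]
    return (height, width)
```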
Some weights of the model checkpoint at MCG-NJU/videomae-base were not used when initializing VideoMAEForVideoClassification: [, 'decoder.decoder_layers.1.attention.output.dense.bias', 'decoder.decoder_layers.2.attention.attention.key.weight']
- This IS expected if you are initializing VideoMAEForVideoClassification ... |
The same sequence of workflow can be applied to the validation and evaluation sets:
Image mean and standard deviation with which the video frame pixels will be normalized.
Spatial resolution to which the video frames will be resized.
Start by defining some constants. |
Now, define the dataset-specific transformations and the datasets respectively. Starting with the training set:
train_transform = Compose(
    [
        ApplyTransformToKey(
            key="video",
            transform=Compose(
                [
                    UniformTemporalSubsample(num_frames_to_sample),
                    Lambda(lambda x: x / 255.0),
                    Normalize(mean, std),
... |
val_transform = Compose(
    [
        ApplyTransformToKey(
            key="video",
            transform=Compose(
                [
                    UniformTemporalSubsample(num_frames_to_sample),
                    Lambda(lambda x: x / 255.0),
                    Normalize(mean, std),
... |
Note: The above dataset pipelines are taken from the official PyTorchVideo example. We're using the pytorchvideo.data.Ucf101() function because it's tailored for the UCF-101 dataset. Under the hood, it returns a pytorchvideo.data.labeled_video_dataset.LabeledVideoDataset object. LabeledVideoDataset class is the base ... |