Evaluate
Object detection models are commonly evaluated with a set of COCO-style metrics.
You can use one of the existing metrics implementations, but here you'll use the one from torchvision to evaluate the final model that you pushed to the Hub.
To use the torchvision evaluator, you'll need to prepare a ground trut...
if not os.path.exists(path_output_cppe5):
    os.makedirs(path_output_cppe5)

path_anno = os.path.join(path_output_cppe5, "cppe5_ann.json")
categories_json = [{"supercategory": "none", "id": id, "name": id2label[id]} for id in id2label]
output_json["images"] = []
output_json["annotations"...
im_processor = AutoImageProcessor.from_pretrained("devonho/detr-resnet-50_finetuned_cppe5")
path_output_cppe5, path_anno = save_cppe5_annotation_file_images(cppe5["test"])
test_ds_coco_format = CocoDetection(path_output_cppe5, im_processor, path_anno)
Finally, load the metrics and run the evaluation.
import evaluate
from tqdm import tqdm

model = AutoModelForObjectDetection.from_pretrained("devonho/detr-resnet-50_finetuned_cppe5")
module = evaluate.load("ybelkada/cocoevaluate", coco=test_ds_coco_format.coco)
val_dataloader = torch.utils.data.DataLoader(
    test_ds_coco_format, batch_size=8, shuffle=False, num_wo...
Inference
Now that you have finetuned a DETR model, evaluated it, and uploaded it to the Hugging Face Hub, you can use it for inference.
The simplest way to try out your finetuned model for inference is to use it in a [Pipeline]. Instantiate a pipeline
for object detection with your model, and pass an image to it:
labels = [
    {k: v for k, v in t.items()} for t in batch["labels"]
]  # these are in DETR format, resized + normalized

# forward pass
outputs = model(pixel_values=pixel_values, pixel_mask=pixel_mask)
orig_target_sizes = torch.stack([target["orig_size"] for targe...
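The loop is truncated here; a minimal sketch of how the full evaluation loop typically completes, assuming the im_processor and module objects loaded above (post_process is the older image-processor API name; newer transformers versions call it post_process_object_detection):

with torch.no_grad():
    for idx, batch in enumerate(tqdm(val_dataloader)):
        pixel_values = batch["pixel_values"]
        pixel_mask = batch["pixel_mask"]
        labels = [{k: v for k, v in t.items()} for t in batch["labels"]]
        # forward pass
        outputs = model(pixel_values=pixel_values, pixel_mask=pixel_mask)
        # rescale predictions back to each image's original size
        orig_target_sizes = torch.stack([target["orig_size"] for target in labels], dim=0)
        results = im_processor.post_process(outputs, orig_target_sizes)
        # accumulate predictions and references in the COCO evaluator
        module.add(prediction=results, reference=labels)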
results = module.compute()
print(results)
Accumulating evaluation results
DONE (t=0.08s).
IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.352
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.681
Average Precision (AP) @[ IoU=0.75 | area= ...
from transformers import pipeline
import requests
from PIL import Image

url = "https://i.imgur.com/2lnWoly.jpg"
image = Image.open(requests.get(url, stream=True).raw)

obj_detector = pipeline("object-detection", model="devonho/detr-resnet-50_finetuned_cppe5")
obj_detector(image)
You can also manually replicate the results of the pipeline if...
Before you begin, make sure you have all the necessary libraries installed:
pip install transformers datasets evaluate seqeval
We encourage you to login to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to login:
from huggingface_hub import noteboo...
Let's plot the result:
from PIL import ImageDraw

draw = ImageDraw.Draw(image)
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    box = [round(i, 2) for i in box.tolist()]
    x, y, x2, y2 = tuple(box)
    draw.rectangle((x, y, x2, y2), outline="red", width=1)
    draw.text((x, y), model.config.id2labe...
image_processor = AutoImageProcessor.from_pretrained("devonho/detr-resnet-50_finetuned_cppe5")
model = AutoModelForObjectDetection.from_pretrained("devonho/detr-resnet-50_finetuned_cppe5")

with torch.no_grad():
    inputs = image_processor(images=image, return_tensors="pt")
    outputs = model(**inputs)
    target...
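The snippet is cut off at the target size computation; a minimal sketch of the usual completion, with an illustrative 0.5 confidence threshold:

    # (height, width) of the original image, used to rescale the predicted boxes
    target_sizes = torch.tensor([image.size[::-1]])
    results = image_processor.post_process_object_detection(outputs, threshold=0.5, target_sizes=target_sizes)[0]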
from datasets import load_dataset
wnut = load_dataset("wnut_17")
Then take a look at an example:
wnut["train"][0]
{'id': '0',
'ner_tags': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 7, 8, 8, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0],
'tokens': ['@paulwalk', 'It', "'s", 'the', 'view', 'from', 'where', 'I', "'m", 'living', 'for',...
Load WNUT 17 dataset
Start by loading the WNUT 17 dataset from the 🤗 Datasets library, as shown above.
The letter that prefixes each ner_tag indicates the token position of the entity:
B- indicates the beginning of an entity.
I- indicates a token is contained inside the same entity (for example, the State token is a part of an entity like
Empire State Building).
O indicates the token doesn't correspond to any entity....
Each number in ner_tags represents an entity. Convert the numbers to their label names to find out what the entities are:
label_list = wnut["train"].features["ner_tags"].feature.names
label_list
[
"O",
"B-corporation",
"I-corporation",
"B-creative-work",
"I-creative-work",
"B-group",
"I-gr...
example = wnut["train"][0]
tokenized_input = tokenizer(example["tokens"], is_split_into_words=True)
tokens = tokenizer.convert_ids_to_tokens(tokenized_input["input_ids"])
tokens
['[CLS]', '@', 'paul', '##walk', 'it', "'", 's', 'the', 'view', 'from', 'where', 'i', "'", 'm', 'living', 'for', 'two', 'weeks', '.', 'empire'...
However, this adds some special tokens [CLS] and [SEP] and the subword tokenization creates a mismatch between the input and labels. A single word corresponding to a single label may now be split into two subwords. You'll need to realign the tokens and labels by:
Mapping all tokens to their corresponding word with the word_ids method.
Assigning the label -100 to the special tokens [CLS] and [SEP] so they're ignored by the PyTorch loss function (see CrossEntropyLoss).
Only labeling the first token of a given word. Assign -100 to other subtokens from the same word.
Here is how you can create a function to realign the tokens and labels, and truncate sequences to be no longer than DistilBERT's maximum input length:
def tokenize_and_align_labels(examples):
    tokenized_inputs = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True)
Preprocess
The next step is to load a DistilBERT tokenizer to preprocess the tokens field:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased")
As you saw in the example tokens field above, it looks like the input has already been tokenized. But the i...
To apply the preprocessing function over the entire dataset, use 🤗 Datasets [~datasets.Dataset.map] function. You can speed up the map function by setting batched=True to process multiple elements of the dataset at once:
tokenized_wnut = wnut.map(tokenize_and_align_labels, batched=True)
Now create a batch of examples using [DataCollatorForTokenClassification]. It's more efficient to dynamically pad the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length.
from transformers import DataCollatorForTokenClassification
data_collator = DataCollatorForTokenClassification(tokenizer=tokenizer)
For TensorFlow, create the same collator with return_tensors="tf":
from transformers import DataCollatorForTokenClassification
data_collator = DataCollatorForTokenClassification(tokenizer=tokenizer, return_tensors="tf")
Evaluate
Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the 🤗 Evaluate library. For this task, load the seqeval framework (see the 🤗 Evaluate quick tour to learn more about how to load and compute a metric). Seqeval actually p...
import evaluate
seqeval = evaluate.load("seqeval")
Get the NER labels first, and then create a function that passes your true predictions and true labels to [~evaluate.EvaluationModule.compute] to calculate the scores:
import numpy as np
labels = [label_list[i] for i in example["ner_tags"]]
def compute_metrics(p):
...
labels = []
for i, label in enumerate(examples["ner_tags"]):
    word_ids = tokenized_inputs.word_ids(batch_index=i)  # Map tokens to their respective word.
    previous_word_idx = None
    label_ids = []
    for word_idx in word_ids:  # Set the special tokens to -100.
        if w...
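The loop body is truncated; here is a complete sketch of the function, assembled from the realignment steps listed earlier (this is the standard recipe, not an addition to it):

def tokenize_and_align_labels(examples):
    tokenized_inputs = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True)

    labels = []
    for i, label in enumerate(examples["ner_tags"]):
        word_ids = tokenized_inputs.word_ids(batch_index=i)  # Map tokens to their respective word.
        previous_word_idx = None
        label_ids = []
        for word_idx in word_ids:
            if word_idx is None:  # Special tokens ([CLS], [SEP]) map to no word.
                label_ids.append(-100)
            elif word_idx != previous_word_idx:  # Label only the first token of each word.
                label_ids.append(label[word_idx])
            else:  # Other subtokens of the same word are ignored by the loss.
                label_ids.append(-100)
            previous_word_idx = word_idx
        labels.append(label_ids)

    tokenized_inputs["labels"] = labels
    return tokenized_inputs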
true_predictions = [
    [label_list[p] for (p, l) in zip(prediction, label) if l != -100]
    for prediction, label in zip(predictions, labels)
]
true_labels = [
    [label_list[l] for (p, l) in zip(prediction, label) if l != -100]
    for prediction, label in zip(predictions, labe...
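Assembled from the fragments above, a sketch of the complete compute_metrics function (the overall_* keys are what seqeval returns):

def compute_metrics(p):
    predictions, labels = p
    predictions = np.argmax(predictions, axis=2)

    true_predictions = [
        [label_list[p] for (p, l) in zip(prediction, label) if l != -100]
        for prediction, label in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for (p, l) in zip(prediction, label) if l != -100]
        for prediction, label in zip(predictions, labels)
    ]

    results = seqeval.compute(predictions=true_predictions, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }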
id2label = {
    0: "O",
    1: "B-corporation",
    2: "I-corporation",
    3: "B-creative-work",
    4: "I-creative-work",
    5: "B-group",
    6: "I-group",
    7: "B-location",
    8: "I-location",
    9: "B-person",
    10: "I-person",
    11: "B-product",
    12: "I-product",
}
label2id = {
...
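The label2id mapping is truncated here; it is just the inverse of id2label, so it can be built in one line:

label2id = {label: idx for idx, label in id2label.items()}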
You're ready to start training your model now! Load DistilBERT with [AutoModelForTokenClassification] along with the number of expected labels, and the label mappings:
from transformers import AutoModelForTokenClassification, TrainingArguments, Trainer
model = AutoModelForTokenClassification.from_pretrained(
    "dis...
Define your training hyperparameters in [TrainingArguments]. The only required parameter is output_dir which specifies where to save your model. You'll push this model to the Hub by setting push_to_hub=True (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [Trainer] will e...
Your compute_metrics function is ready to go now, and you'll return to it when you set up your training.
Train
Before you start training your model, create a map of the expected ids to their labels with id2label and label2id:
Once training is completed, share your model to the Hub with the [~transformers.Trainer.push_to_hub] method so everyone can use your model:
trainer.push_to_hub()
If you aren't familiar with finetuning a model with Keras, take a look at the basic tutorial here!
To finetune a model in TensorFlow, start by setting up a...
If you aren't familiar with finetuning a model with the [Trainer], take a look at the basic tutorial here!
You're ready to start training your model now! Load DistilBERT with [AutoModelForTokenClassification] along with the number of expected labels, and the label mappings:
To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:
from transformers import create_optimizer
batch_size = 16
num_train_epochs = 3
num_train_steps = (len(tokenized_wnut["train"]) // batch_size) * num_train_epochs
optimizer, lr_schedul...
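The call is cut off; a sketch of how it typically completes with transformers' create_optimizer, using illustrative warmup and weight-decay values:

optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,
    num_train_steps=num_train_steps,
    weight_decay_rate=0.01,
    num_warmup_steps=0,
)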
Then you can load DistilBERT with [TFAutoModelForTokenClassification] along with the number of expected labels, and the label mappings:
from transformers import TFAutoModelForTokenClassification
model = TFAutoModelForTokenClassification.from_pretrained(
    "distilbert/distilbert-base-uncased", num_labels=13, id2labe...
Configure the model for training with compile. Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to:
import tensorflow as tf
model.compile(optimizer=optimizer)  # No loss argument!
training_args = TrainingArguments(
    output_dir="my_awesome_wnut_model",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=2,
    weight_decay=0.01,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,...
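The argument list is truncated; a minimal sketch of how training is then launched with the [Trainer], assuming push_to_hub=True was set above:

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_wnut["train"],
    eval_dataset=tokenized_wnut["test"],
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
)
trainer.train()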
import tensorflow as tf
model.compile(optimizer=optimizer) # No loss argument!
The last two things to setup before you start training is to compute the seqeval scores from the predictions, and provide a way to push your model to the Hub. Both are done by using Keras callbacks.
Pass your compute_metrics function to [~...
from transformers.keras_callbacks import KerasMetricCallback
metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set)
Specify where to push your model and tokenizer in the [~transformers.PushToHubCallback]:
from transformers.keras_callbacks import PushToHubCallback
push_to_hub_...
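The callback definition is cut off; it typically looks like this sketch (the output directory name is assumed from the earlier training setup):

push_to_hub_callback = PushToHubCallback(
    output_dir="my_awesome_wnut_model",
    tokenizer=tokenizer,
)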
Convert your datasets to the tf.data.Dataset format with [~transformers.TFPreTrainedModel.prepare_tf_dataset]:
tf_train_set = model.prepare_tf_dataset(
    tokenized_wnut["train"],
    shuffle=True,
    batch_size=16,
    collate_fn=data_collator,
)
tf_validation_set = model.prepare_tf_dataset(
    tokenized_wnu...
model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=3, callbacks=callbacks)
Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!
For a more in-depth example of how to finetune a model for token classification, take a look at the corresponding
PyTorch note...
Inference
Great, now that you've finetuned a model, you can use it for inference!
Grab some text you'd like to run inference on:
text = "The Golden State Warriors are an American professional basketball team based in San Francisco."
The simplest way to try out your finetuned model for inference is to use it in a [pip...
from transformers import pipeline
classifier = pipeline("ner", model="stevhliu/my_awesome_wnut_model")
classifier(text)
[{'entity': 'B-location',
'score': 0.42658573,
'index': 2,
'word': 'golden',
'start': 4,
'end': 10},
{'entity': 'I-location',
'score': 0.35856336,
'index': 3,
'word': 'state',
'st...
Then bundle your callbacks together:
callbacks = [metric_callback, push_to_hub_callback]
Finally, you're ready to start training your model! Call fit with your training and validation datasets, the number of epochs, and your callbacks to finetune the model:
model.fit(x=tf_train_set, validation_data=tf_validation_set...
Pass your inputs to the model and return the logits:
from transformers import AutoModelForTokenClassification

model = AutoModelForTokenClassification.from_pretrained("stevhliu/my_awesome_wnut_model")
with torch.no_grad():
    logits = model(**inputs).logits
Get the class with the highest probability, and use the model's id2label mapping to convert it to a text label:
predictions = torch.argmax(logits, dim=2)
predicted_token_class = [model.config.id2label[t.item()] for t in predictions[0]]
predicted_token_class
['O',
'O',
'B-location',
'I-location',
'B-group',
'O',
...
You can also manually replicate the results of the pipeline if you'd like:
Tokenize the text and return PyTorch tensors:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_wnut_model")
inputs = tokenizer(text, return_tensors="pt")
Pass your inputs to the model and r...
from transformers import TFAutoModelForTokenClassification
model = TFAutoModelForTokenClassification.from_pretrained("stevhliu/my_awesome_wnut_model")
logits = model(**inputs).logits
Get the class with the highest probability, and use the model's id2label mapping to convert it to a text label:
Load IMDb dataset
Start by loading the IMDb dataset from the 🤗 Datasets library:
from datasets import load_dataset
imdb = load_dataset("imdb")
Then take a look at an example:
Tokenize the text and return TensorFlow tensors:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_wnut_model")
inputs = tokenizer(text, return_tensors="tf")
Pass your inputs to the model and return the logits:
from transformers import TFAutoModelForTokenClassifica...
There are two fields in this dataset:
text: the movie review text.
label: a value that is either 0 for a negative review or 1 for a positive review.
Preprocess
The next step is to load a DistilBERT tokenizer to preprocess the text field:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretraine...
Get the class with the highest probability, and use the model's id2label mapping to convert it to a text label:
predicted_token_class_ids = tf.math.argmax(logits, axis=-1)
predicted_token_class = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()]
predicted_token_class
['O',
'O',
'B-loc...
Before you begin, make sure you have all the necessary libraries installed:
pip install transformers datasets evaluate accelerate
We encourage you to login to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to login:
from huggingface_hub import note...
Create a preprocessing function to tokenize text and truncate sequences to be no longer than DistilBERT's maximum input length:
def preprocess_function(examples):
    return tokenizer(examples["text"], truncation=True)
To apply the preprocessing function over the entire dataset, use 🤗 Datasets [~datasets.Dataset.map] function. You can speed up map by setting batched=True to process multiple elements of the dataset at once:
tokenized_imdb = imdb.map(preprocess_function, batched=True)
Now create a batch of examples using [DataColla...
imdb["test"][0]
{
"label": 0,
"text": "I love sci-fi and am willing to put up with a lot. Sci-fi movies/TV are usually underfunded, under-appreciated and misunderstood. I tried to like this, I really did, but it is to good TV sci-fi as Babylon 5 is to Star Trek (the original). Silly prosthetics, cheap cardboa...
Evaluate
Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the 🤗 Evaluate library. For this task, load the accuracy metric (see the 🤗 Evaluate quick tour to learn more about how to load and compute a metric):
import evaluate
accuracy = evaluate.load("accuracy")
Then create a function that passes your predictions and labels to [~evaluate.EvaluationModule.compute] to calculate the accuracy:
import numpy as np
def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = np.argmax(predictions, axis=...
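The function is cut off mid-line; a minimal sketch of the usual completion, using the accuracy metric loaded above:

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = np.argmax(predictions, axis=1)  # pick the highest-scoring class
    return accuracy.compute(predictions=predictions, references=labels)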
from transformers import DataCollatorWithPadding
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
For TensorFlow, create the same collator with return_tensors="tf":
from transformers import DataCollatorWithPadding
data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf")
id2label = {0: "NEGATIVE", 1: "POSITIVE"}
label2id = {"NEGATIVE": 0, "POSITIVE": 1}
If you aren't familiar with finetuning a model with the [Trainer], take a look at the basic tutorial here!
You're ready to start training your model now! Load DistilBERT with [AutoModelForSequenceClassification] along with the number ...
Your compute_metrics function is ready to go now, and you'll return to it when you set up your training.
Train
Before you start training your model, create a map of the expected ids to their labels with id2label and label2id:
id2label = {0: "NEGATIVE", 1: "POSITIVE"}
label2id = {"NEGATIVE": 0, "POSITIVE": 1}
training_args = TrainingArguments(
    output_dir="my_awesome_model",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=2,
    weight_decay=0.01,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
...
[Trainer] applies dynamic padding by default when you pass tokenizer to it. In this case, you don't need to specify a data collator explicitly.
Once training is completed, share your model to the Hub with the [~transformers.Trainer.push_to_hub] method so everyone can use your model:
trainer.push_to_hub()
If you aren't familiar with finetuning a model with Keras, take a look at the basic tutorial here!
To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:
You're ready to start training your model now! Load DistilBERT with [AutoModelForSequenceClassification] along with the number of expected labels, and the label mappings:
from transformers import AutoModelForSequenceClassification, TrainingArguments, Trainer
model = AutoModelForSequenceClassification.from_pretrained(
...
To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:
from transformers import create_optimizer
import tensorflow as tf
batch_size = 16
num_epochs = 5
batches_per_epoch = len(tokenized_imdb["train"]) // batch_size
total_train_steps = in...
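The assignment is truncated; a sketch of the typical continuation, assuming the variables defined just above:

total_train_steps = int(batches_per_epoch * num_epochs)
optimizer, schedule = create_optimizer(init_lr=2e-5, num_warmup_steps=0, num_train_steps=total_train_steps)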
Define your training hyperparameters in [TrainingArguments]. The only required parameter is output_dir which specifies where to save your model. You'll push this model to the Hub by setting push_to_hub=True (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [Trainer] will e...
Convert your datasets to the tf.data.Dataset format with [~transformers.TFPreTrainedModel.prepare_tf_dataset]:
tf_train_set = model.prepare_tf_dataset(
    tokenized_imdb["train"],
    shuffle=True,
    batch_size=16,
    collate_fn=data_collator,
)
tf_validation_set = model.prepare_tf_dataset(
    tokenized_imd...
Configure the model for training with compile. Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to:
import tensorflow as tf
model.compile(optimizer=optimizer)  # No loss argument!
from transformers.keras_callbacks import KerasMetricCallback
metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set)
Specify where to push your model and tokenizer in the [~transformers.PushToHubCallback]:
from transformers.keras_callbacks import PushToHubCallback
push_to_hub_...
Then you can load DistilBERT with [TFAutoModelForSequenceClassification] along with the number of expected labels, and the label mappings:
from transformers import TFAutoModelForSequenceClassification
model = TFAutoModelForSequenceClassification.from_pretrained(
    "distilbert/distilbert-base-uncased", num_labels=2,...
Then bundle your callbacks together:
callbacks = [metric_callback, push_to_hub_callback]
Finally, you're ready to start training your model! Call fit with your training and validation datasets, the number of epochs, and your callbacks to finetune the model:
model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=3, callbacks=callbacks)
Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!
For a more in-depth example of how to finetune a model for text classification, take a look at the corresponding
PyTorch noteb...
import tensorflow as tf
model.compile(optimizer=optimizer) # No loss argument!
The last two things to setup before you start training is to compute the accuracy from the predictions, and provide a way to push your model to the Hub. Both are done by using Keras callbacks.
Pass your compute_metrics function to [~transf...
The simplest way to try out your finetuned model for inference is to use it in a [pipeline]. Instantiate a pipeline for sentiment analysis with your model, and pass your text to it:
from transformers import pipeline
classifier = pipeline("sentiment-analysis", model="stevhliu/my_awesome_model")
classifier(text)
[{'labe...
predicted_class_id = logits.argmax().item()
model.config.id2label[predicted_class_id]
'POSITIVE'
Tokenize the text and return TensorFlow tensors:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_model")
inputs = tokenizer(text, return_tensors="tf")
Pass your input...
Inference
Great, now that you've finetuned a model, you can use it for inference!
Grab some text you'd like to run inference on:
text = "This was a masterpiece. Not completely faithful to the books, but enthralling from beginning to end. Might be my favorite of the three."
The simplest way to try out your finetuned m...
You can also manually replicate the results of the pipeline if you'd like:
Tokenize the text and return PyTorch tensors:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_model")
inputs = tokenizer(text, return_tensors="pt")
Pass your inputs to the model and return...
Knowledge Distillation for Computer Vision
[[open-in-colab]]
Knowledge distillation is a technique used to transfer knowledge from a larger, more complex model (teacher) to a smaller, simpler model (student). To distill knowledge from one model to another, we take a pre-trained teacher model trained on a certain task ...
Pass your inputs to the model and return the logits:
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("stevhliu/my_awesome_model")
with torch.no_grad():
    logits = model(**inputs).logits
Get the class with the highest probability, and use the mo...
pip install transformers datasets accelerate tensorboard evaluate --upgrade
In this example, we are using the merve/beans-vit-224 model as the teacher model. It's an image classification model, based on google/vit-base-patch16-224-in21k, fine-tuned on the beans dataset. We will distill this model to a randomly initialized Mobil...
We can use an image processor from either of the models, as in this case they return the same output with the same resolution. We will use the map() method of the dataset to apply the preprocessing to every split of the dataset.
from transformers import AutoImageProcessor

teacher_processor = AutoImageProcessor.from_pre...
Pass your inputs to the model and return the logits:
from transformers import TFAutoModelForSequenceClassification
model = TFAutoModelForSequenceClassification.from_pretrained("stevhliu/my_awesome_model")
logits = model(**inputs).logits
Get the class with the highest probability, and use the model's id2label mapping ...
with torch.no_grad():
    teacher_output = self.teacher(**inputs)

# Compute soft targets for teacher and student
soft_teacher = F.softmax(teacher_output.logits / self.temperature, dim=-1)
soft_student = F.log_softmax(student_output.logits / self.temperature, dim=-1)

# Compute the loss
distillati...
Essentially, we want the student model (a randomly initialized MobileNet) to mimic the teacher model (fine-tuned vision transformer). To achieve this, we first get the logits output from the teacher and the student. Then, we divide each of them by the parameter temperature which controls the importance of each soft t...
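To make the temperature idea concrete, here is a minimal, self-contained sketch of a softened KL-divergence distillation loss (the function name and default temperature are illustrative, not from this guide):

import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=5.0):
    # Soften both distributions; a higher temperature spreads probability mass
    # over more classes, exposing more of the teacher's ranking information.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence between the softened distributions, scaled by temperature**2
    # to keep gradient magnitudes comparable across temperatures.
    return F.kl_div(soft_student, soft_teacher, reduction="batchmean") * temperature**2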
# Compute the true label loss
student_target_loss = student_output.loss

# Calculate final loss
loss = (1. - self.lambda_param) * student_target_loss + self.lambda_param * distillation_loss
return (loss, student_output) if return_outputs else loss
We will now log in to the Hugging Face Hub so we can push ou...
We can use a compute_metrics function to evaluate our model on the test set. This function will be used during the training process to compute the accuracy and f1 of our model.
import evaluate
import numpy as np

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    ...
Let's initialize the Trainer with the training arguments we defined. We will also initialize our data collator.
from transformers import DefaultDataCollator

data_collator = DefaultDataCollator()
trainer = ImageDistilTrainer(
    student_model=student_model,
    teacher_model=teacher_model,
    training_args=trai...
Let's set the TrainingArguments, the teacher model and the student model.
from transformers import AutoModelForImageClassification, MobileNetV2Config, MobileNetV2ForImageClassification

training_args = TrainingArguments(
    output_dir="my-awesome-model",
    num_train_epochs=30,
    fp16=True,
    logging_dir=f...
We can now train our model.
trainer.train()
We can evaluate the model on the test set.
trainer.evaluate(processed_datasets["test"])
On the test set, our model reaches 72 percent accuracy. As a sanity check on the efficiency of distillation, we also trained MobileNet on the beans dataset from scratch with...
Zero-shot image classification
[[open-in-colab]]
Zero-shot image classification is a task that involves classifying images into different categories using a model that was
not explicitly trained on data containing labeled examples from those specific categories.
Traditionally, image classification requires training a ...
In this guide you'll learn how to:
create a zero-shot image classification pipeline
run zero-shot image classification inference by hand
Before you begin, make sure you have all the necessary libraries installed:
pip install -q transformers
Zero-shot image classification pipeline
The simplest way to try out inference with a model supporting zero-shot ...
Pass the image and the candidate object labels to the pipeline. Here we pass the image directly; other suitable options
include a local path to an image or an image url.
The candidate labels can be simple words like in this example, or more descriptive.
predictions = detector(image, candidate_labels=["fox", "bear", "seagull", "owl"])
predictions
[{'score': 0.9996670484542847, 'label': 'owl'},
{'score': 0.000199399160919711, 'label': 'seagull'},
{'score': 7.392891711788252e-05, 'label': 'fox'},
{'score': 5.96074532950297e-05, 'label': 'bear'}]
Zero-shot image classification by hand
Now that you've seen how to use the zero-shot image classification pipeline, let's take a look at how you can run zero-shot image classification manually.
Start by loading the model and associated processor from a checkpoint on the Hugging Face Hub.
Here we'll use the same checkpoint...
from transformers import AutoProcessor, AutoModelForZeroShotImageClassification
model = AutoModelForZeroShotImageClassification.from_pretrained(checkpoint)
processor = AutoProcessor.from_pretrained(checkpoint)
Let's take a different image to switch things up.
from PIL import Image
import requests
url = "https://unspl...
from transformers import pipeline
checkpoint = "openai/clip-vit-large-patch14"
detector = pipeline(model=checkpoint, task="zero-shot-image-classification")
Next, choose an image you'd like to classify.
from PIL import Image
import requests
url = "https://unsplash.com/photos/g8oS8-82DxI/download?ixid=MnwxMjA3fDB8MXx0b...
import torch
with torch.no_grad():
    outputs = model(**inputs)

logits = outputs.logits_per_image[0]
probs = logits.softmax(dim=-1).numpy()
scores = probs.tolist()

result = [
    {"score": score, "label": candidate_label}
    for score, candidate_label in sorted(zip(probs, candidate_labels), key=lambda x: -x[0])
]...
Use the processor to prepare the inputs for the model. The processor combines an image processor that prepares the
image for the model by resizing and normalizing it, and a tokenizer that takes care of the text inputs.
candidate_labels = ["tree", "car", "bike", "cat"]
inputs = processor(images=image, text=candidate_la...
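The call is truncated; a typical completion looks like this sketch (padding is needed because the candidate labels differ in length):

inputs = processor(images=image, text=candidate_labels, return_tensors="pt", padding=True)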
Fine-tuning ViLT
The ViLT model incorporates text embeddings into a Vision Transformer (ViT), allowing it to have a minimal design for Vision-and-Language Pre-training (VLP). This model can be used for several downstream tasks. For the VQA task, a classifier head is placed on top (a linear layer on top of the final hid...