import numpy as np
def compute_metrics(pred):
    pred_logits = pred.predictions
    pred_ids = np.argmax(pred_logits, axis=-1)
    pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id
    pred_str = processor.batch_decode(pred_ids)
    # don't group tokens when computing the metrics
    label_str = processor.batch_decode(pred.label_ids, group_tokens=False)
    # assumes a WER metric loaded earlier, e.g. wer = evaluate.load("wer")
    wer_score = wer.compute(predictions=pred_str, references=label_str)
    return {"wer": wer_score}
import torch
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional, Union
from transformers import AutoProcessor

@dataclass
class DataCollatorCTCWithPadding:
    processor: AutoProcessor
    padding: Union[bool, str] = "longest"
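    # NOTE: the class body is cut off in this excerpt. Below is a sketch of the __call__
    # method the guide defines, which pads inputs and labels separately (they need
    # different padding) and masks label padding with -100 so it is ignored by the CTC loss:
    def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
        # split inputs and labels since they have to be of different lengths
        # and need different padding methods
        input_features = [{"input_values": feature["input_values"][0]} for feature in features]
        label_features = [{"input_ids": feature["labels"]} for feature in features]
        batch = self.processor.pad(input_features, padding=self.padding, return_tensors="pt")
        labels_batch = self.processor.pad(labels=label_features, padding=self.padding, return_tensors="pt")
        # replace padding with -100 to ignore these positions in the loss
        labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)
        batch["labels"] = labels
        return batch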
Your compute_metrics function is ready to go now, and you'll return to it when you set up your training.
Train
If you aren't familiar with finetuning a model with the [Trainer], take a look at the basic tutorial here!
You're ready to start training your model now! Load Wav2Vec2 with [AutoModelForCTC]. Specify the reduction to apply with the ctc_loss_reduction parameter:
Now instantiate your DataCollatorCTCWithPadding:
data_collator = DataCollatorCTCWithPadding(processor=processor, padding="longest")
Evaluate
Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the 🤗 Evaluate library. For this task, load the word error rate (WER) metric:
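A minimal sketch of loading WER with the 🤗 Evaluate API (the variable name wer matches its use in compute_metrics above):

import evaluate
wer = evaluate.load("wer")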
Define your training hyperparameters in [TrainingArguments]. The only required parameter is output_dir, which specifies where to save your model. You'll push this model to the Hub by setting push_to_hub=True (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [Trainer] will evaluate the WER and save the training checkpoint.
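The remaining steps are to pass the training arguments to [Trainer] together with the model, dataset, processor, data collator, and compute_metrics function, and then call train. A sketch, assuming the preprocessed dataset splits live in a variable named encoded_minds (the name is an assumption; the other objects are defined elsewhere in this guide):

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=encoded_minds["train"],  # `encoded_minds` is an assumed variable name
    eval_dataset=encoded_minds["test"],
    tokenizer=processor,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
)
trainer.train()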
Once training is completed, share your model to the Hub with the [~transformers.Trainer.push_to_hub] method so everyone can use your model:
trainer.push_to_hub()
For a more in-depth example of how to finetune a model for automatic speech recognition, take a look at this blog post for English ASR and this post for multilingual ASR.
Inference
Great, now that you've finetuned a model, you can use it for inference!
Load an audio file you'd like to run inference on:
from transformers import AutoModelForCTC, TrainingArguments, Trainer
model = AutoModelForCTC.from_pretrained(
"facebook/wav2vec2-base",
ctc_loss_reduction="mean",
pad_token_id=processor.tokenizer.pad_token_id,
)
At this point, only three steps remain:

from datasets import load_dataset, Audio
dataset = load_dataset("PolyAI/minds14", "en-US", split="train")
dataset = dataset.cast_column("audio", Audio(sampling_rate=16000))
sampling_rate = dataset.features["audio"].sampling_rate
audio_file = dataset[0]["audio"]["path"]
The simplest way to try out your finetuned model for inference is to use it in a [pipeline]. Instantiate a pipeline for automatic speech recognition with your model, and pass your audio file to it:
from transformers import pipeline
transcriber = pipeline("automatic-speech-recognition", model="stevhliu/my_awesome_asr_mind_model")
transcriber(audio_file)
The transcription is decent, but it could be better! Try finetuning your model on more examples to get even better results!
training_args = TrainingArguments(
output_dir="my_awesome_asr_mind_model",
per_device_train_batch_size=8,
gradient_accumulation_steps=2,
learning_rate=1e-5,
warmup_steps=500,
max_steps=2000,
gradient_checkpointing=True,
fp16=True,
group_by_length=True,
    evaluation_strategy="steps",
    push_to_hub=True,
)
You can also manually replicate the results of the pipeline if you'd like:
Load a processor to preprocess the audio file and transcription and return the input as PyTorch tensors:
from transformers import AutoProcessor
processor = AutoProcessor.from_pretrained("stevhliu/my_awesome_asr_mind_model")
inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
Get the predicted input_ids with the highest probability, and use the processor to decode the predicted input_ids back into text:
import torch
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
transcription
['I WOUL LIKE O SET UP JOINT ACOUNT WTH Y PARTNER']

from datasets import load_dataset
eli5 = load_dataset("eli5_category", split="train[:5000]")
Split the dataset's train split into a train and test set with the [~datasets.Dataset.train_test_split] method:
eli5 = eli5.train_test_split(test_size=0.2)
Then take a look at an example:

Pass your inputs to the model and return the logits:
from transformers import AutoModelForCTC
model = AutoModelForCTC.from_pretrained("stevhliu/my_awesome_asr_mind_model")
with torch.no_grad():
    logits = model(**inputs).logits
Before you begin, make sure you have all the necessary libraries installed:
pip install transformers datasets evaluate
We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:
from huggingface_hub import notebook_login
notebook_login()
While this may look like a lot, you're only really interested in the text field. What's cool about language modeling
tasks is you don't need labels (also known as an unsupervised task) because the next word is the label.
Preprocess
Load ELI5 dataset
Start by loading the first 5000 examples from the ELI5-Category dataset with the 🤗 Datasets library. This'll give you a chance to experiment and make sure everything works before spending more time training on the full dataset.
The next step is to load a DistilGPT2 tokenizer to process the text subfield:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("distilbert/distilgpt2")
You'll notice from the example above that the text field is actually nested inside answers. This means you'll need to extract the text subfield from its nested structure with the [~datasets.Dataset.flatten] method:
eli5["train"][0]
{'q_id': '7h191n',
'title': 'What does the tax bill that was passed today mean? How will it affect Americans in each tax bracket?',
'selftext': '',
'category': 'Economics',
'subreddit': 'explainlikeimfive',
'answers': {'a_id': ['dqnds8l', 'dqnd1jl', 'dqng3i1', 'dqnku5x'],
'text': ["The tax bil... |
[
0.038326707,
0.0028962956,
0.010122975,
0.016632611,
-0.0038628993,
-0.044766046,
-0.030903194,
0.01351136,
-0.021131711,
0.054607827,
0.017982341,
-0.004688906,
-0.006840038,
-0.010376049,
-0.035852205,
0.025152782,
-0.056407467,
-0.049799412,
-0.046846878,
-0.00036752902,
-... | def preprocess_function(examples):
    return tokenizer([" ".join(x) for x in examples["answers.text"]])
To apply this preprocessing function over the entire dataset, use the 🤗 Datasets [~datasets.Dataset.map] method. You can speed up the map function by setting batched=True to process multiple elements of the dataset at once, and increase the number of processes with num_proc. Remove any columns you don't need:
tokenized_eli5 = eli5.map(
preprocess_function,
batched=True,
num_proc=4,
remove_columns=eli5["train"].column_names,
)
This dataset contains the token sequences, but some of these are longer than the maximum input length for the model. You can now use a second preprocessing function to concatenate all the sequences and split them into shorter chunks defined by block_size:
block_size = 128
def group_texts(examples):
    # Concatenate all texts.
    concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = len(concatenated_examples[list(examples.keys())[0]])
    # We drop the small remainder; we could add padding instead if the model supported it.
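    # NOTE: the original snippet is truncated here; the remainder below is a sketch of
    # the standard causal-LM recipe (truncate to a multiple of block_size, split into
    # chunks, and copy input_ids to labels).
    if total_length >= block_size:
        total_length = (total_length // block_size) * block_size
    # Split by chunks of block_size.
    result = {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated_examples.items()
    }
    # For causal language modeling, the labels are the inputs; the model shifts them internally.
    result["labels"] = result["input_ids"].copy()
    return result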
eli5 = eli5.flatten()
eli5["train"][0]
{'q_id': '7h191n',
'title': 'What does the tax bill that was passed today mean? How will it affect Americans in each tax bracket?',
'selftext': '',
'category': 'Economics',
'subreddit': 'explainlikeimfive',
'answers.a_id': ['dqnds8l', 'dqnd1jl', 'dqng3i1', 'dqnku5x'],
'ans...

Apply the group_texts function over the entire dataset:
lm_dataset = tokenized_eli5.map(group_texts, batched=True, num_proc=4)
Now create a batch of examples using [DataCollatorForLanguageModeling]. It's more efficient to dynamically pad the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length.
Each subfield is now a separate column, as indicated by the answers prefix, and the text field is now a list. Instead
of tokenizing each sentence separately, convert the list to a string so you can jointly tokenize them.
Here is a first preprocessing function to join the list of strings for each example and tokenize the result:
Use the end-of-sequence token as the padding token and set mlm=False. This will use the inputs as labels shifted to the right by one element:
from transformers import DataCollatorForLanguageModeling
tokenizer.pad_token = tokenizer.eos_token
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
If you aren't familiar with finetuning a model with the [Trainer], take a look at the basic tutorial!
You're ready to start training your model now! Load DistilGPT2 with [AutoModelForCausalLM]:
from transformers import AutoModelForCausalLM, TrainingArguments, Trainer
model = AutoModelForCausalLM.from_pretrained("dist... |
Use the end-of-sequence token as the padding token and set mlm=False. This will use the inputs as labels shifted to the right by one element:
from transformers import DataCollatorForLanguageModeling
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False, return_tensors="tf")
Train
training_args = TrainingArguments(
output_dir="my_awesome_eli5_clm-model",
evaluation_strategy="epoch",
learning_rate=2e-5,
weight_decay=0.01,
push_to_hub=True,
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=lm_dataset["train"],
    eval_dataset=lm_dataset["test"],
    data_collator=data_collator,
)
trainer.train()
At this point, only three steps remain:
Define your training hyperparameters in [TrainingArguments]. The only required parameter is output_dir which specifies where to save your model. You'll push this model to the Hub by setting push_to_hub=True (you need to be signed in to Hugging Face to upload your model).
Pass the training arguments to [Trainer] along with the model, datasets, and data collator.
Call [~transformers.Trainer.train] to finetune your model.
Once training is completed, use the [~transformers.Trainer.evaluate] method to evaluate your model and get its perplexity:
import math
eval_results = trainer.evaluate()
print(f"Perplexity: {math.exp(eval_results['eval_loss']):.2f}")
Perplexity: 49.61
Then share your model to the Hub with the [~transformers.Trainer.push_to_hub] method so everyone can use your model:
trainer.push_to_hub()
If you aren't familiar with finetuning a model with Keras, take a look at the basic tutorial!
To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:
from transformers import create_optimizer, AdamWeightDecay
optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01)
Convert your datasets to the tf.data.Dataset format with [~transformers.TFPreTrainedModel.prepare_tf_dataset]:
tf_train_set = model.prepare_tf_dataset(
lm_dataset["train"],
shuffle=True,
batch_size=16,
collate_fn=data_collator,
)
tf_test_set = model.prepare_tf_dataset(
lm_dataset["test"],
... |
from transformers.keras_callbacks import PushToHubCallback
callback = PushToHubCallback(
    output_dir="my_awesome_eli5_clm-model",
    tokenizer=tokenizer,
)
Finally, you're ready to start training your model! Call fit with your training and validation datasets, the number of epochs, and your callback to finetune the model:
Then you can load DistilGPT2 with [TFAutoModelForCausalLM]:
from transformers import TFAutoModelForCausalLM
model = TFAutoModelForCausalLM.from_pretrained("distilbert/distilgpt2")
Inference
Great, now that you've finetuned a model, you can use it for inference!
Come up with a prompt you'd like to generate text from:
prompt = "Somatic hypermutation allows the immune system to"
The simplest way to try out your finetuned model for inference is to use it in a [pipeline]. Instantiate a pipeline for text generation with your model, and pass your text to it:
Configure the model for training with compile. Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to:
import tensorflow as tf
model.compile(optimizer=optimizer) # No loss argument!
This can be done by specifying where to push your model and tokenizer in the [~transformers.PushToHubCallback]:
from transformers import pipeline
generator = pipeline("text-generation", model="username/my_awesome_eli5_clm-model")
generator(prompt)
[{'generated_text': "Somatic hypermutation allows the immune system to be able to effectively reverse the damage caused by an infection.\n\n\nThe damage caused by an infection is cause... |
[
0.028772322,
-0.016629875,
-0.023903612,
0.022055848,
-0.02545808,
-0.01951884,
0.005568955,
0.0031309333,
0.002645162,
0.02622065,
-0.027188525,
0.009268149,
-0.021278614,
-0.035048854,
-0.032027908,
-0.00923882,
0.013007671,
-0.026895229,
-0.044023708,
-0.008366264,
-0.0213... | Tokenize the text and return the input_ids as PyTorch tensors:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("username/my_awesome_eli5_clm-model")
inputs = tokenizer(prompt, return_tensors="pt").input_ids
Use the [~transformers.generation_utils.GenerationMixin.generate] method to generate text. For more details about the different text generation strategies and parameters for controlling generation, check out the Text generation strategies page.
model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=[callback])
Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!
For a more in-depth example of how to finetune a model for causal language modeling, take a look at the corresponding
PyTorch notebook or TensorFlow notebook.
tokenizer.batch_decode(outputs, skip_special_tokens=True)
["Somatic hypermutation allows the immune system to react to drugs with the ability to adapt to a different environmental situation. In other words, a system of 'hypermutation' can help the immune system to adapt to a different environmental situation or in so... |
[
0.026338981,
0.0011240238,
-0.02279194,
0.014737811,
-0.027526215,
-0.0277021,
0.0037705638,
0.0031531295,
-0.015360742,
0.049072295,
-0.030281767,
0.009146091,
-0.00708309,
-0.0358515,
-0.026177753,
-0.012971619,
0.0051739905,
-0.04130398,
-0.06372949,
-0.009812993,
-0.01868... | from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("username/my_awesome_eli5_clm-model")
inputs = tokenizer(prompt, return_tensors="tf").input_ids
Use the [~transformers.generation_tf_utils.TFGenerationMixin.generate] method to generate text. For more details about the different text generation strategies and parameters for controlling generation, check out the Text generation strategies page.
from transformers import TFAutoModelForCausalLM
model = TFAutoModelForCausalLM.from_pretrained("username/my_awesome_eli5_clm-model")
outputs = model.generate(input_ids=inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
Decode the generated token ids back into text:
tokenizer.batch_decode(outputs, skip_special_tokens=True)
['Somatic hypermutation allows the immune system to detect the presence of other viruses as they become more prevalent. Therefore, researchers have identified a high proportion of human viruses. The proportion of virus-associated viruses in our study increases...

from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("username/my_awesome_eli5_clm-model")
outputs = model.generate(inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
Decode the generated token ids back into text:

Basics of prompting
Best practices of LLM prompting
Advanced prompting techniques: few-shot prompting and chain-of-thought
When to fine-tune instead of prompting
Prompt engineering is only a part of the LLM output optimization process. Another essential component is choosing the optimal text generation strategy. You can customize how your LLM selects each of the subsequent tokens when generating the text without modifying any of the trainable parameters. By tweaking the text generation parameters, you can reduce repetition in the generated text and make it more coherent and human-sounding.
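For example, the same model and prompt can produce different outputs purely through decoding parameters; a minimal sketch (the parameter values are illustrative, not from this guide):

# greedy decoding (the default)
outputs = model.generate(**inputs, max_new_tokens=50)
# sampling with temperature and nucleus (top-p) filtering
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, temperature=0.7, top_p=0.9)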
Generation with LLMs
Text generation strategies
LLM prompting guide
[[open-in-colab]]
Large Language Models such as Falcon, LLaMA, etc. are pretrained transformer models initially trained to predict the
next token given some input text. They typically have billions of parameters and have been trained on trillions of
tokens for an extended period of time. As a result, these models become quite powerful and versatile, and you can use them to solve multiple NLP tasks out of the box by instructing the models with natural language prompts.
pip install -q transformers accelerate
Next, let's load the model with the appropriate pipeline ("text-generation"):
python
from transformers import pipeline, AutoTokenizer
import torch
torch.manual_seed(0) # doctest: +IGNORE_RESULT
model = "tiiuae/falcon-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model)
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
Basics of prompting
Types of models
The majority of modern LLMs are decoder-only transformers. Some examples include LLaMA, Llama2, Falcon, and GPT2. However, you may encounter encoder-decoder transformer LLMs as well, for instance, Flan-T5 and BART. Encoder-decoder-style models are typically used in generative tasks where the output heavily relies on the input, for example, in translation and summarization. The decoder-only models are used for all other types of generative tasks.
from transformers import pipeline
import torch
torch.manual_seed(0) # doctest: +IGNORE_RESULT
generator = pipeline('text-generation', model = 'openai-community/gpt2')
prompt = "Hello, I'm a language model"
generator(prompt, max_length = 30)
[{'generated_text': "Hello, I'm a language model expert, so I'm a big believer ... |
[
0.039822917,
-0.007591611,
-0.028677788,
-0.016152363,
-0.03306829,
-0.04616639,
-0.010998291,
0.024859956,
0.004397848,
0.023553083,
0.0016712189,
0.041261945,
0.010432959,
-0.016563514,
-0.008590121,
-0.007048304,
-0.031130008,
-0.026475191,
-0.067957394,
-0.038765673,
0.05... |
Base vs instruct/chat models
Most of the recent LLM checkpoints available on 🤗 Hub come in two versions: base and instruct (or chat). For example,
tiiuae/falcon-7b and tiiuae/falcon-7b-instruct.
Base models are excellent at completing the text when given an initial prompt; however, they are not ideal for NLP tasks where they need to follow instructions, or for conversational use. This is where the instruct (chat) versions come in: these checkpoints are the result of further fine-tuning the pre-trained base versions on instruction and conversational data.
To run inference with an encoder-decoder, use the text2text-generation pipeline:
python
text2text_generator = pipeline("text2text-generation", model = 'google/flan-t5-base')
prompt = "Translate from English to French: I'm very happy to see you"
text2text_generator(prompt)
[{'generated_text': 'Je suis très heureuse de vo...
Now that we have the model loaded via the pipeline, let's explore how you can use prompts to solve NLP tasks.
Text classification
One of the most common forms of text classification is sentiment analysis, which assigns a label like "positive", "negative",
or "neutral" to a sequence of text. Let's write a prompt that... |
torch.manual_seed(0) # doctest: +IGNORE_RESULT
prompt = """Classify the text into neutral, negative or positive.
Text: This movie is definitely one of my favorite movies of its kind. The interaction between respectable and morally strong characters is an ode to chivalry and the honor code amongst thieves and police...

As a result, the output contains a classification label from the list we have provided in the instructions, and it is a correct one!
You may notice that in addition to the prompt, we pass a max_new_tokens parameter. It controls the number of tokens the
model shall generate, and it is one of the many text generation parameters you can learn about in the Text generation strategies guide.
Named Entity Recognition
Named Entity Recognition (NER) is a task of finding named entities in a piece of text, such as a person, location, or organization.
Let's modify the instructions in the prompt to make the LLM perform this task. Here, let's also set return_full_text = False
so that the output doesn't contain the prompt:
torch.manual_seed(1) # doctest: +IGNORE_RESULT
prompt = """Return a list of named entities in the text.
Text: The Golden State Warriors are an American professional basketball team based in San Francisco.
Named entities:
"""
sequences = pipe(
    prompt,
    max_new_tokens=15,
    return_full_text=False,
)
for seq in sequences:
    print(f"{seq['generated_text']}")
Note that Falcon models were trained using the bfloat16 datatype, so we recommend you use the same. This requires a recent
version of CUDA and works best on modern cards.
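A sketch of loading the pipeline in bfloat16 (device_map="auto" is an assumption and requires the accelerate package):

import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="tiiuae/falcon-7b-instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)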
As you can see, the model correctly identified two named entities from the given text.
Translation
Another task LLMs can perform is translation. You can choose to use encoder-decoder models for this task, however, here,
for the simplicity of the examples, we'll keep using Falcon-7b-instruct, which does a decent job. Here's how you can write a basic prompt to instruct a model to translate a piece of text:
Here we've added do_sample=True and top_k=10 to allow the model to be a bit more flexible when generating output.
Text summarization
Similar to the translation, text summarization is another generative task where the output heavily relies on the input,
and encoder-decoder models can be a better choice. However, decoder-style models can be used for this task as well.
torch.manual_seed(4) # doctest: +IGNORE_RESULT
prompt = """Answer the question using the context below.
Context: Gazpacho is a cold soup and drink made of raw, blended vegetables. Most gazpacho includes stale bread, tomato, cucumbers, onion, bell peppers, garlic, olive oil, wine vinegar, water, and salt. Northern re...

torch.manual_seed(2) # doctest: +IGNORE_RESULT
prompt = """Translate the English text to Italian.
Text: Sometimes, I've believed as many as six impossible things before breakfast.
Translation:
"""
sequences = pipe(
    prompt,
    max_new_tokens=20,
    do_sample=True,
    top_k=10,
    return_full_text=False,
)
for seq in sequences:
    print(f"{seq['generated_text']}")
Question answering
For a question answering task, we can structure the prompt into the following logical components: instructions, context, question, and the leading word or phrase ("Answer:") to nudge the model to start generating the answer:
python
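# The code block for this example is not included in this excerpt; the following is an
# illustrative sketch of a prompt with the structure just described (the context,
# question, and seed are illustrative, not from this guide):
torch.manual_seed(7)  # illustrative seed
prompt = """Answer the question using the context below.
Context: The Eiffel Tower was completed in 1889 and stands 330 metres tall.
Question: When was the Eiffel Tower completed?
Answer:
"""
sequences = pipe(prompt, max_new_tokens=10, return_full_text=False)
for seq in sequences:
    print(f"{seq['generated_text']}")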
torch.manual_seed(3) # doctest: +IGNORE_RESULT
prompt = """Permaculture is a design process mimicking the diversity, functionality and resilience of natural ecosystems. The principles and practices are drawn from traditional ecological knowledge of indigenous cultures combined with modern scientific understanding and... |
[
0.038005803,
-0.009357177,
-0.01264799,
0.014564768,
-0.050619442,
0.008656419,
0.004125539,
0.004269813,
-0.014922017,
0.035367656,
0.0141525585,
0.020761665,
-0.016117427,
0.0072480333,
-0.055483524,
0.0007346792,
-0.046359934,
0.00011861784,
-0.0035106589,
-0.012764783,
-0... | Correct! Let's increase the complexity a little and see if we can still get away with a basic prompt:
python
Reasoning
Reasoning is one of the most difficult tasks for LLMs, and achieving good results often requires applying advanced prompting techniques, like Chain-of-thought.
Let's see if we can make a model reason about a simple arithmetic task with a basic prompt:
python
torch.manual_seed(6) # doctest: +IGNORE_RESULT
prompt = """I baked 15 muffins. I ate 2 muffins and gave 5 muffins to a neighbor. My partner then bought 6 more muffins and ate 2. How many muffins do we now have?"""
sequences = pipe(
    prompt,
    max_new_tokens=10,
    do_sample=True,
    top_k=10,
    return_full_text=False,
)
for seq in sequences:
    print(f"{seq['generated_text']}")
torch.manual_seed(5) # doctest: +IGNORE_RESULT
prompt = """There are 5 groups of students in the class. Each group has 4 students. How many students are there in the class?"""
sequences = pipe(
    prompt,
    max_new_tokens=30,
    do_sample=True,
    top_k=10,
    return_full_text=False,
)
for seq in sequences:
    print(f"{seq['generated_text']}")
This is a wrong answer; it should be 12. In this case, the error can be due to the prompt being too basic, or due to the choice of model; after all, we've picked the smallest version of Falcon. Reasoning is difficult for models of all sizes, but larger models are likely to perform better.
Best practices of LLM prompting
In this section of the guide, we have compiled a list of best practices that tend to improve the prompt results:
When choosing the model to work with, the latest and most capable models are likely to perform better.
Start with a simple and short prompt, and iterate from there.
Put the instructions at the beginning of the prompt, or at the very end. When working with large context, models apply various optimizations to prevent attention complexity from scaling quadratically, which may make a model more attentive to the beginning or end of a prompt than the middle.
torch.manual_seed(0) # doctest: +IGNORE_RESULT
prompt = """Text: The first human went into space and orbited the Earth on April 12, 1961.
Date: 04/12/1961
Text: The first-ever televised presidential debate in the United States took place on September 28, 1960, between presidential candidates John F. Kennedy and Ric...
Advanced prompting techniques
Few-shot prompting
The basic prompts in the sections above are examples of "zero-shot" prompts, meaning the model has been given instructions and context, but no examples with solutions. LLMs that have been fine-tuned on instruction datasets generally perform well on such "zero-shot" tasks. However, if your task is more complex or nuanced and is better communicated through examples, you can use a technique called few-shot prompting, as sketched below.
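A minimal sketch of the few-shot pattern, in the spirit of the date-extraction example shown above (the completion line is left for the model):

prompt = """Text: The first human went into space and orbited the Earth on April 12, 1961.
Date: 04/12/1961
Text: The first-ever televised presidential debate in the United States took place on September 28, 1960.
Date:
"""
sequences = pipe(prompt, max_new_tokens=8, do_sample=True, top_k=10)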
In the above code snippet, we used a single example to demonstrate the desired output to the model, so this can be called "one-shot" prompting. However, depending on the task complexity, you may need to use more than one example.
Limitations of the few-shot prompting technique:
- While LLMs can pick up on the patterns in the examples, this technique does not work well on complex reasoning tasks.
Your domain is wildly different from what LLMs were pre-trained on and extensive prompt optimization did not yield sufficient results.
You need your model to work well in a low-resource language.
You need the model to be trained on sensitive data that is under strict regulations.
You have to use a small model due to cost, privacy, infrastructure, or other limitations.
In all of the above examples, you will need to make sure that you either already have or can easily obtain a large enough
domain-specific dataset at a reasonable cost to fine-tune a model. You will also need to have enough time and resources
to fine-tune a model.
If the above examples are not the case for you, optimizing prompts can prove to be more beneficial.
Image Feature Extraction
[[open-in-colab]]
Image feature extraction is the task of extracting semantically meaningful features given an image. This has many use cases, including image similarity and image retrieval. Moreover, most computer vision models can be used for image feature extraction, where one can remove the task-specific head (image classification, object detection, etc.) and get the features.
Let's see the pipeline in action. First, initialize the pipeline. If you don't pass any model to it, the pipeline will be automatically initialized with google/vit-base-patch16-224. If you'd like to calculate similarity, set pool to True.
python
import torch
from transformers import pipeline
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
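# The snippet is cut off before the pipeline is created; a sketch of the initialization
# the paragraph describes, using the default checkpoint named above and pool=True for similarity:
pipe = pipeline(
    task="image-feature-extraction",
    model_name="google/vit-base-patch16-224",
    device=DEVICE,
    pool=True,
)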
To infer with pipe, pass both images to it.
python
outputs = pipe([image_real, image_gen])
The output contains pooled embeddings of those two images.
python
# get the length of a single output
print(len(outputs[0][0]))
# show outputs
print(outputs)
768
[[[-0.03909236937761307, 0.43381670117378235, -0.06913255900144577,
Learn to build a simple image similarity system on top of the image-feature-extraction pipeline.
Accomplish the same task with bare model inference.
Image Similarity using image-feature-extraction Pipeline
We have two images of cats sitting on top of fish nets; one of them is generated.
python
from PIL import Image
import requests
img_urls = ["https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/cats.png", "https://huggingface.co/datasets/...
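# The list stops at the second URL; a sketch of downloading and opening both images,
# with the variable names used later in this section:
image_real = Image.open(requests.get(img_urls[0], stream=True).raw).convert("RGB")
image_gen = Image.open(requests.get(img_urls[1], stream=True).raw).convert("RGB")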
Getting Features and Similarities using AutoModel
We can also use the AutoModel class of 🤗 Transformers to get the features. AutoModel loads any Transformers model with no task-specific head, and we can use this to get the features.
python
from transformers import AutoImageProcessor, AutoModel
processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
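# The snippet is cut off before the model is created; assuming the same checkpoint
# for the model as for the processor:
model = AutoModel.from_pretrained("google/vit-base-patch16-224").to(DEVICE)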
TimeSformer
Overview
The TimeSformer model was proposed in TimeSformer: Is Space-Time Attention All You Need for Video Understanding? by Facebook Research.
This work is a milestone in the action-recognition field, being the first video transformer. It inspired many transformer-based video understanding and classification papers.
Video classification task guide
TimesformerConfig
[[autodoc]] TimesformerConfig
TimesformerModel
[[autodoc]] TimesformerModel
- forward
TimesformerForVideoClassification
[[autodoc]] TimesformerForVideoClassification
- forward

To get the similarity score, we need to pass them to a similarity function.
python
from torch.nn.functional import cosine_similarity
similarity_score = cosine_similarity(torch.Tensor(outputs[0]),
                                     torch.Tensor(outputs[1]), dim=1)
print(similarity_score)
tensor([0.6043])
If you want to get the last hidden states before pooling, avoid passing any value for the pool parameter, as it is set to False by default. These hidden states are useful for training new classifiers or models based on the features from the model.
python
pipe = pipeline(task="image-feature-extraction", model_name="google/vit-base-patch16-224", device=DEVICE)
Let's write a simple function for inference. We will pass the inputs to the processor first and pass its outputs to the model.
python
def infer(image):
    inputs = processor(image, return_tensors="pt").to(DEVICE)
    outputs = model(**inputs)
    return outputs.pooler_output
We can pass the images directly to this function:
Vocabulary In v2, the tokenizer is changed to use a new vocabulary of size 128K built from the training data. Instead of a GPT2-based tokenizer, the tokenizer is now a sentencepiece-based tokenizer.
nGiE (nGram Induced Input Encoding) The DeBERTa-v2 model uses an additional convolution layer alongside the first transformer layer to better learn the local dependency of input tokens.
This model was contributed by DeBERTa. This model's TF 2.0 implementation was
contributed by kamalkraj. The original code can be found here.
Resources
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
DebertaV2Model
[[autodoc]] DebertaV2Model
- forward
DebertaV2PreTrainedModel
[[autodoc]] DebertaV2PreTrainedModel
- forward
DebertaV2ForMaskedLM
[[autodoc]] DebertaV2ForMaskedLM
- forward
DebertaV2ForSequenceClassification
[[autodoc]] DebertaV2ForSequenceClassification
- forward
DebertaV2ForTokenClassification
DeBERTa-v2
Overview
The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. It is based on Google's
BERT model released in 2018 and Facebook's RoBERTa model released in 2019.
It builds on RoBERTa with disentangled attention ...
TFDebertaV2Model
[[autodoc]] TFDebertaV2Model
- call
TFDebertaV2PreTrainedModel
[[autodoc]] TFDebertaV2PreTrainedModel
- call
TFDebertaV2ForMaskedLM
[[autodoc]] TFDebertaV2ForMaskedLM
- call
TFDebertaV2ForSequenceClassification
[[autodoc]] TFDebertaV2ForSequenceClassification
- call
TFDebertaV2ForTokenClassification
from PIL import Image
import requests
from transformers import ChineseCLIPProcessor, ChineseCLIPModel
model = ChineseCLIPModel.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")
processor = ChineseCLIPProcessor.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")
url = "https://clip-cn-beijing.oss-cn-beijin...
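# The snippet is truncated at the image URL. A sketch of how such an example typically
# continues, computing image-text similarity (the candidate captions are illustrative;
# ChineseCLIPModel follows the CLIP API):
image = Image.open(requests.get(url, stream=True).raw)
texts = ["杰尼龟", "妙蛙种子", "小火龙", "皮卡丘"]  # illustrative candidate captions

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=1)  # probability the image matches each caption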
DebertaV2Config
[[autodoc]] DebertaV2Config
DebertaV2Tokenizer
[[autodoc]] DebertaV2Tokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
DebertaV2TokenizerFast
[[autodoc]] DebertaV2TokenizerFast
- build_inputs_with_special_tokens
Currently, the following scales of pretrained Chinese-CLIP models are available on 🤗 Hub:
OFA-Sys/chinese-clip-vit-base-patch16
OFA-Sys/chinese-clip-vit-large-patch14
OFA-Sys/chinese-clip-vit-large-patch14-336px
OFA-Sys/chinese-clip-vit-huge-patch14
Chinese-CLIP
Overview
The Chinese-CLIP model was proposed in Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese by An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, Chang Zhou.
Chinese-CLIP is an implementation of CLIP (Radford et al., 2021) on a large-scale dataset of Chinese image-text pairs.