If you need to capture both streams at once, use the parent CaptureStd class:
```python
from transformers.testing_utils import CaptureStd

with CaptureStd() as cs:
    function_that_writes_to_stdout_and_stderr()
print(cs.err, cs.out)
```
Also, to aid debugging test issues, by default these context managers automatically replay the captured streams on exit from the context.

Capturing logger stream

If you need to validate the output of a logger, you can use CaptureLogger:

```python
from transformers import logging
from transformers.testing_utils import CaptureLogger
```
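A minimal sketch of how CaptureLogger can be used (the logger name and message below are only illustrative):

```python
from transformers import logging
from transformers.testing_utils import CaptureLogger

msg = "Testing 1, 2, 3"
logging.set_verbosity_info()
# pick the logger whose output you want to validate; this name is just an example
logger = logging.get_logger("transformers.models.bart.tokenization_bart")
with CaptureLogger(logger) as cl:
    logger.info(msg)
assert msg in cl.out
```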
Here is a full test example:
```python
from transformers.testing_utils import CaptureStdout

msg = "Secret message\r"
final = "Hello World"
with CaptureStdout() as cs:
    print(msg + final)
assert cs.out == final + "\n", f"captured: {cs.out}, expecting {final}"
```

If you'd like to capture stderr use the CaptureStderr class instead:
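Assuming CaptureStderr mirrors CaptureStdout with an err attribute, a minimal sketch could look like:

```python
import sys

from transformers.testing_utils import CaptureStderr

with CaptureStderr() as cs:
    print("a message to stderr", file=sys.stderr)
assert "a message to stderr" in cs.err
```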
Testing with environment variables
If you want to test the impact of environment variables for a specific test, you can use the helper decorator transformers.testing_utils.mockenv:

```python
from transformers.testing_utils import mockenv

class HfArgumentParserTest(unittest.TestCase):
    @mockenv(TRANSFORMERS_VERBOSITY="error")
    ...
```
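A self-contained sketch of such a test (the class and method names below are illustrative):

```python
import os
import unittest

from transformers.testing_utils import mockenv


class EnvVarTest(unittest.TestCase):
    # mockenv patches os.environ for the duration of the decorated test
    @mockenv(TRANSFORMERS_VERBOSITY="error")
    def test_env_is_overridden(self):
        self.assertEqual(os.getenv("TRANSFORMERS_VERBOSITY"), "error")
```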
At times an external program needs to be called, which requires setting PYTHONPATH in os.environ to include multiple local paths. A helper class transformers.testing_utils.TestCasePlus comes to help:

```python
from transformers.testing_utils import TestCasePlus

class EnvExampleTest(TestCasePlus):
    def test_external_prog(self):
        ...
```
Depending on whether the test file was under the tests test suite or examples, it'll correctly set up env[PYTHONPATH] to include one of these two directories, and also the src directory to ensure the testing is done against the current repo, and finally with whatever env[PYTHONPATH] was already set to before the test ...
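A hedged sketch of how this can be used, with an illustrative external call via subprocess:

```python
import subprocess
import sys

from transformers.testing_utils import TestCasePlus


class EnvExampleTest(TestCasePlus):
    def test_external_prog(self):
        # get_env() returns a copy of os.environ with PYTHONPATH set up as described above
        env = self.get_env()
        # the external program here is illustrative; it relies on the prepared PYTHONPATH
        subprocess.run([sys.executable, "-c", "import transformers"], env=env, check=True)
```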
Debugging tests
To start a debugger at the point of the warning, do this:

```bash
pytest tests/utils/test_logging.py -W error::UserWarning --pdb
```

Working with GitHub Actions workflows

To trigger a self-push workflow CI job, you must:
Testing Experimental CI Features
Testing CI features can be potentially problematic as it can interfere with the normal CI functioning. Therefore, if a new CI feature is to be added, it should be done as follows:
1. Create a new dedicated job that tests what needs to be tested.
2. The new job must always succeed so that it gives us a green ✓ (details below).
3. Let it run for some days to see that a variety of different PR types get to run on it (user fork branches, non-forked branches, branches originating from the github.com UI direct...
That way experiments on CI functionality itself won't interfere with the normal workflow.
Now, how can we make the job always succeed while the new CI feature is being developed? Some CIs, like TravisCI, support ignore-step-failure and will report the overall job as successful, but CircleCI and GitHub Actions as of this ...
- set +euo pipefail at the beginning of the run command to suppress most potential failures in the bash script.
- the last command must be a success: echo "done" or just true will do
Create a new branch on transformers origin (not a fork!).
The branch name has to start with either ci_ or ci- (main triggers it too, but we can't do PRs on main). It also gets triggered only for specific paths - you can find the up-to-date definition, in case it changed since this document was written, here un...
```bash
cmd_that_may_fail || true
```

Of course, once satisfied with the results, integrate the experimental step or job with the rest of the normal jobs, while removing set +euo pipefail or any other things you may have added to ensure that the experimental job doesn't interfere with normal CI functioning.

This whole process ...
GitHub Actions:
CircleCI:
DeepSpeed integration

For a PR that involves the DeepSpeed integration, keep in mind our CircleCI PR CI setup doesn't have GPUs. Tests requiring GPUs are run on a different CI nightly. This means if you get a passing CI report in your PR, it doesn't mean the DeepSpeed tests pass.

To run DeepSpeed tests:
Here is an example:
```yaml
- run:
    name: run CI experiment
    command: |
        set +euo pipefail
        echo "setting run-all-despite-any-errors-mode"
        this_command_will_fail
        echo "but bash continues to run"
        # emulate another failure
        false
        # but the last command must be a success
        echo "done"
```
- Methods and tools for efficient training on a single GPU: start here to learn common approaches that can help optimize GPU memory utilization, speed up the training, or both.
- Multi-GPU training section: explore this section to learn about further optimization methods that apply to multi-GPU settings, such as data, tensor, and pipeline parallelism.
```bash
RUN_SLOW=1 pytest tests/deepspeed/test_deepspeed.py
```
Any changes to the modeling or PyTorch examples code require running the model zoo tests as well.

```bash
RUN_SLOW=1 pytest tests/deepspeed
```
Instantiating a big model
Troubleshooting performance issues

Contribute

This document is far from complete and a lot more needs to be added, so if you have additions or corrections to make, please don't hesitate to open a PR, or if you aren't sure, start an Issue and we can discuss the details there.

When making contributions ...
Performance and Scalability

Training large transformer models and deploying them to production present various challenges. During training, the model may require more GPU memory than available or exhibit slow training speed. In the deployment phase, the model can struggle to handle the required throughput in a production ...
What 🤗 Transformers can do

🤗 Transformers is a library of pretrained state-of-the-art models for natural language processing (NLP), computer vision, and audio and speech processing tasks. Not only does the library contain Transformer models, but it also has non-Transformer models like modern convolutional networks for ...
Inference
Efficient inference with large models in a production environment can be as challenging as training them. In the following sections we go through the steps to run inference on CPU and single/multi-GPU setups.

- Inference on a single CPU
- Inference on a single GPU
- Multi-GPU inference
- XLA Integration for TensorFlow Models
```python
from transformers import pipeline

classifier = pipeline(task="audio-classification", model="superb/hubert-base-superb-er")
preds = classifier("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")
preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds]
preds
[{'score': ...
```
Training and inference
Here you'll find techniques, tips and tricks that apply whether you are training a model, or running inference with it.

- Instantiating a big model
- Troubleshooting performance issues
```python
from transformers import pipeline

transcriber = pipeline(task="automatic-speech-recognition", model="openai/whisper-small")
transcriber("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")
{'text': ' I have a dream that one day this nation will rise up and live out the true meaning of its creed.'}
```
- acoustic scene classification: label audio with a scene label ("office", "beach", "stadium")
- acoustic event detection: label audio with a sound event label ("car horn", "whale calling", "glass breaking")
- tagging: label audio containing multiple sounds (birdsongs, speaker identification in a meeting)
- music classification ...
- Use convolutions to learn the hierarchical features of an image, from low-level features to high-level abstract things.
- Split an image into patches and use a Transformer to gradually learn how the image patches relate to each other to form an image. Unlike the bottom-up approach favored by a CNN, this is kind of like ...
Computer vision

One of the earliest successful computer vision tasks was recognizing images of zip code numbers using a convolutional neural network (CNN). An image is composed of pixels, and each pixel has a numerical value. This makes it easy to represent an image as a matrix of pixel values. Each particular ...
Automatic speech recognition

Automatic speech recognition (ASR) transcribes speech into text. It is one of the most common audio tasks due partly to speech being such a natural form of human communication. Today, ASR systems are embedded in "smart" technology products like speakers, phones, and cars. We can ask our virtual ...
Image classification

Image classification labels an entire image from a predefined set of classes. Like most classification tasks, there are many practical use cases for image classification, some of which include:
- healthcare: label medical images to detect disease or monitor patient health
- environment: label satellite images to monitor deforestation, inform wildland management or detect wildfires
- agriculture: label images of crops to monitor plant health or satellite images for land use monitoring
- ecology: label images of animal ...
Object detection

Unlike image classification, object detection identifies multiple objects within an image and their positions in the image (defined by bounding boxes). Some example applications of object detection include:
- self-driving vehicles: detect everyday traffic objects such as other vehicles, pedestrians, and traffic lights
- remote sensing: disaster monitoring, urban planning, and weather forecasting
- defect detection: detect cracks or structural damage in buildings, and manufacturing defects
Image segmentation

Image segmentation is a pixel-level task that assigns every pixel in an image to a class. It differs from object detection, which uses bounding boxes to label and predict objects in an image, because segmentation is more granular: it can detect objects at a pixel level. There are several types ...
- instance segmentation: in addition to labeling the class of an object, it also labels each distinct instance of an object ("dog-1", "dog-2")
- panoptic segmentation: a combination of semantic and instance segmentation; it labels each pixel with a semantic class and each distinct instance of an object
```python
from transformers import pipeline

classifier = pipeline(task="image-classification")
preds = classifier(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
)
preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds]
print(*preds, sep="\n")
```
```python
from transformers import pipeline

segmenter = pipeline(task="image-segmentation")
preds = segmenter(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
)
preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds]
print(*preds, sep="\n")
```
Depth estimation

Depth estimation predicts the distance of each pixel in an image from the camera. This computer vision task is especially important for scene understanding and reconstruction. For example, in self-driving cars, vehicles need to understand how far objects like pedestrians, traffic signs, and other vehicles ...
```python
from transformers import pipeline

detector = pipeline(task="object-detection")
preds = detector(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
)
preds = [{"score": round(pred["score"], 4), "label": pred["label"], "box": pred["box"]} for pred in preds]
preds
```
- stereo: depths are estimated by comparing two images of the same scene from slightly different angles
- monocular: depths are estimated from a single image

```python
from transformers import pipeline

depth_estimator = pipeline(task="depth-estimation")
preds = depth_estimator(
    "https://huggingface.co/datasets/huggingface/docu...
```
Natural language processing

NLP tasks are among the most common types of tasks because text is such a natural way for us to communicate. To get text into a format recognized by a model, it needs to be tokenized. This means dividing a sequence of text into separate words or subwords (tokens) and then converting these tokens into numbers.
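As a rough illustration of tokenization (the checkpoint name is just an example):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoding = tokenizer("Hugging Face is great!")
print(encoding["input_ids"])  # the numbers the model consumes
print(tokenizer.convert_ids_to_tokens(encoding["input_ids"]))  # the subword tokens they represent
```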
Segmentation tasks are helpful in self-driving vehicles to create a pixel-level map of the world around them so they can navigate safely around pedestrians and other vehicles. It is also useful for medical imaging, where the task's finer granularity can help identify abnormal cells or organ features. Image segmentation ...
```python
from transformers import pipeline

classifier = pipeline(task="sentiment-analysis")
preds = classifier("Hugging Face is the best thing since sliced bread!")
preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds]
preds
[{'score': 0.9991, 'label': 'POSITIVE'}]
```
- named entity recognition (NER): label a token according to an entity category like organization, person, location or date. NER is especially popular in biomedical settings, where it can label genes, proteins, and drug names.
- part-of-speech tagging (POS): label a token according to its part-of-speech like noun, verb, or ...
```python
from transformers import pipeline

classifier = pipeline(task="ner")
preds = classifier("Hugging Face is a French company based in New York City.")
preds = [
    {
        "entity": pred["entity"],
        "score": round(pred["score"], 4),
        "index": pred["index"],
        "word": pred["word"],
        "start": pred["start"],
        "end": pred["end"],
    }
    for pred in preds
]
```
- sentiment analysis: label text according to some polarity like positive or negative, which can inform and support decision-making in fields like politics, finance, and marketing
- content classification: label text according to some topic to help organize and filter information in news and social media feeds (weather, sports, ...
- extractive: given a question and some context, the answer is a span of text from the context the model must extract
- abstractive: given a question and some context, the answer is generated from the context; this approach is handled by the [Text2TextGenerationPipeline] instead of the [QuestionAnsweringPipeline] shown below
Question answering

Question answering is another token-level task that returns an answer to a question, sometimes with context (open-domain) and other times without context (closed-domain). This task happens whenever we ask a virtual assistant something like whether a restaurant is open. It can also provide customer or ...
Token classification

In any NLP task, text is preprocessed by separating the sequence of text into individual words or subwords. These are known as tokens. Token classification assigns each token a label from a predefined set of classes.

Two common types of token classification are:
```python
from transformers import pipeline

question_answerer = pipeline(task="question-answering")
preds = question_answerer(
    question="What is the name of the repository?",
    context="The name of the repository is huggingface/transformers",
)
print(
    f"score: {round(preds['score'], 4)}, start: {preds['start']}, end: {preds['end']}, answer: {preds['answer']}"
)
```
Summarization

Summarization creates a shorter version of a text from a longer one while trying to preserve most of the meaning of the original document. Summarization is a sequence-to-sequence task; it outputs a shorter text sequence than the input. There are a lot of long-form documents that can be summarized to help ...
- extractive: identify and extract the most important sentences from the original text
- abstractive: generate the target summary (which may include new words not in the input document) from the original text; the [SummarizationPipeline] uses the abstractive approach
```python
from transformers import pipeline

summarizer = pipeline(task="summarization")
summarizer(
    "In this work, we presented the Transformer, the first sequence transduction model based entirely on attention, replacing the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-atten...
```
Translation

Translation converts a sequence of text in one language to another. It is important in helping people from different backgrounds communicate with each other, helping translate content to reach wider audiences, and even being a learning tool to help people learn a new language. Along with summarization, translation ...
```python
from transformers import pipeline

text = "translate English to French: Hugging Face is a community-based open-source platform for machine learning."
translator = pipeline(task="translation", model="google-t5/t5-small")
translator(text)
[{'translation_text': "Hugging Face est une tribune communautaire de l'apprentissage...
```
Language modeling

Language modeling is a task that predicts a word in a sequence of text. It has become a very popular NLP task because a pretrained language model can be finetuned for many other downstream tasks. Lately, there has been a lot of interest in large language models (LLMs) which demonstrate zero- or few-shot ...
text = "Hugging Face is a community-based open-source for machine learning."
fill_mask = pipeline(task="fill-mask")
preds = fill_mask(text, top_k=1)
preds = [
{
"score": round(pred["score"], 4),
"token": pred["token"],
"token_str": pred["token_str"],
"sequence": pred["sequenc... |
Multimodal

Multimodal tasks require a model to process multiple data modalities (text, image, audio, video) to solve a particular problem. Image captioning is an example of a multimodal task where the model takes an image as input and outputs a sequence of text describing the image or some properties of the image.

A...
Hopefully, this page has given you some more background information about all the types of tasks in each modality and the practical importance of each one. In the next section, you'll learn how 🤗 Transformers work to solve these tasks.
- causal: the model's objective is to predict the next token in a sequence, and future tokens are masked

```python
from transformers import pipeline

prompt = "Hugging Face is a community-based open-source platform for machine learning."
generator = pipeline(task="text-generation")
generator(prompt)  # doctest: +SKIP
```

- masked: the ...
Hyperparameter Search using Trainer API

🤗 Transformers provides a [Trainer] class optimized for training 🤗 Transformers models, making it easier to start training without manually writing your own training loop. The [Trainer] provides an API for hyperparameter search. This doc shows how to enable it in an example.

Hyperparameter ...
```bash
pip install optuna/sigopt/wandb/ray[tune]
```
How to enable Hyperparameter search in example

Define the hyperparameter search space; different backends need different formats.

For sigopt, see sigopt object_parameter; it's like the following:
```python
def sigopt_hp_space(trial):
    return [
        {"bounds": {"min": 1e-6, "max": 1e-4}, "name": "learning_rate", "type": "double"},
        {
            "categorical_values": ["16", "32", "64", "128"],
            "name": "per_device_train_batch_size",
            "type": "categorical",
        },
    ]
```

For optuna, ...
```python
from transformers import pipeline
from PIL import Image
import requests

url = "https://datasets-server.huggingface.co/assets/hf-internal-testing/example-documents/--/hf-internal-testing--example-documents/test/2/image/image.jpg"
image = Image.open(requests.get(url, stream=True).raw)
doc_question_answerer = pipeline("document-question-answering", ...
```
Optuna provides multi-objective HPO. You can pass direction in hyperparameter_search and define your own compute_objective to return multiple objective values. The Pareto Front (List[BestRun]) will be returned by hyperparameter_search; you should refer to the test case TrainerHyperParameterMultiObjectOptunaIntegrationT...
For wandb, see wandb object_parameter; it's like the following:

```python
def wandb_hp_space(trial):
    return {
        "method": "random",
        "metric": {"name": "objective", "goal": "minimize"},
        "parameters": {
            "learning_rate": {"distribution": "uniform", "min": 1e-6, "max": 1e-4},
            "per_device_train_batch_size": {"values": [16, 32, 64, 128]},
        },
    }
```
For optuna, see optuna object_parameter; it's like the following:

```python
def optuna_hp_space(trial):
    return {
        "learning_rate": trial.suggest_float("learning_rate", 1e-6, 1e-4, log=True),
        "per_device_train_batch_size": trial.suggest_categorical("per_device_train_batch_size", [16, 32, 64, 128]),
    }
```
Create a [Trainer] with your model_init function, training arguments, training and test datasets, and evaluation function:

```python
trainer = Trainer(
    model=None,
    args=training_args,
    train_dataset=small_train_dataset,
    eval_dataset=small_eval_dataset,
    compute_metrics=compute_metrics,
    tokenizer=tokenizer,
    model_init=model_init,
)
```
```python
best_trials = trainer.hyperparameter_search(
    direction=["minimize", "maximize"],
    backend="optuna",
    hp_space=optuna_hp_space,
    n_trials=20,
    compute_objective=compute_objective,
)
```

For raytune, see raytune object_parameter; it's like the following:

```python
def ray_hp_space(trial):
    # assumes `from ray import tune`
    return {
        "learning_rate": tune.loguniform(1e-6, 1e-4),
        "per_device_train_batch_size": tune.choice([16, 32, 64, 128]),
    }
```
```python
best_trial = trainer.hyperparameter_search(
    direction="maximize",
    backend="optuna",
    hp_space=optuna_hp_space,
    n_trials=20,
    compute_objective=compute_objective,
)
```

Hyperparameter search For DDP finetune

Currently, Hyperparameter search for DDP is enabled for optuna and sigopt. Only the rank-zero ...
Define a model_init function and pass it to the [Trainer], as an example:

```python
def model_init(trial):
    return AutoModelForSequenceClassification.from_pretrained(
        model_args.model_name_or_path,
        from_tf=bool(".ckpt" in model_args.model_name_or_path),
        config=config,
        cache_dir=model_args.cache_dir,
    )
```
Call hyperparameter search to get the best trial parameters. The backend can be "optuna"/"sigopt"/"wandb"/"ray". direction can be "minimize" or "maximize", which indicates whether to optimize for a greater or lower objective.

You can define your own compute_objective function; if not defined, the default compute_objective will ...
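A minimal sketch of a custom compute_objective (the metric key is illustrative): it receives the metrics dict produced by evaluation and returns the value to optimize.

```python
def compute_objective(metrics):
    # e.g. minimize the evaluation loss reported by Trainer.evaluate()
    return metrics["eval_loss"]
```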
- DDP - Distributed DataParallel
- ZeRO - depending on the situation and configuration used, this method may or may not be faster, however, it's worth experimenting with it.

Case 2: Your model doesn't fit onto a single GPU:

If your model is too large for a single GPU, you have several alternatives to consider:

- PipelineParallel ...
Efficient Training on Multiple GPUs

If training a model on a single GPU is too slow or if the model's weights do not fit in a single GPU's memory, transitioning to a multi-GPU setup may be a viable option. Prior to making this transition, thoroughly explore all the strategies covered in the Methods and tools for efficient training on a single GPU ...
If you are not using ZeRO, you have to use TensorParallel (TP), because PipelineParallel (PP) alone won't be sufficient to accommodate the large layer. If you are using ZeRO, additionally adopt techniques from the Methods and tools for efficient training on a single GPU.

Parallelization strategy for a multi-Node / multi-GPU setup
While the main concepts discussed in this guide are likely applicable across frameworks, here we focus on PyTorch-based implementations.
Before diving deeper into the specifics of each technique, let's go over the rough decision process when training large models on a large infrastructure.

Scalability strategy

Begin by estimating how much vRAM is required to train your model. For models hosted on the 🤗 Hub, use our Model Memory Calculator, which gives ...
Parallelization strategy for a multi-Node / multi-GPU setup

When you have fast inter-node connectivity (e.g., NVLINK or NVSwitch) consider using one of these options:

- ZeRO - as it requires close to no modifications to the model
- A combination of PipelineParallel(PP) with TensorParallel(TP) and DataParallel(DP) - this ...
When you have slow inter-node connectivity and are still low on GPU memory:

- Employ a combination of DataParallel(DP) with PipelineParallel(PP), TensorParallel(TP), and ZeRO.
In the following sections of this guide we dig deeper into how these different parallelism methods work.

Data Parallelism

Even with only 2 GPUs, you can readily leverage the accelerated training capabilities offered by PyTorch's built-in features, such as DataParallel (DP) and DistributedDataParallel (DDP). Note that ...
With very fast inter-node connectivity (e.g., NVLINK or NVSwitch) all three strategies (PP, ZeRO, TP) should result in similar performance. However, without these, PP will be faster than TP or ZeRO. The degree of TP may also make a difference. It's best to experiment with your specific setup to determine the most suitable ...
DDP:

At start time, the main process replicates the model once from GPU 0 to the rest of the GPUs. Then for each batch:

1. Each GPU directly consumes its mini-batch of data.
2. During backward, once the local gradients are ready, they are averaged across all processes.
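A minimal sketch of DDP initialization in PyTorch, assuming the script is launched with torchrun (the toy model is illustrative):

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# one process per GPU; torchrun sets the env vars init_process_group needs
dist.init_process_group("nccl")
local_rank = dist.get_rank() % torch.cuda.device_count()
model = torch.nn.Linear(10, 10).to(local_rank)
# DDP hooks into backward so local gradients are all-reduced (averaged) across processes
ddp_model = DDP(model, device_ids=[local_rank])
```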
Hardware: 2x TITAN RTX 24GB each + NVlink with 2 NVLinks (NV2 in nvidia-smi topo -m).
Software: pytorch-1.8-to-be + cuda-11.0 / transformers==4.3.0.dev0.
DDP w/o NVlink

```bash
rm -r /tmp/test-clm; NCCL_P2P_DISABLE=1 CUDA_VISIBLE_DEVICES=0,1 \
torchrun --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py \
--model_name_or_path openai-community/gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \
--do_train --output_dir /tmp/test-clm --per_device_train_batch_size ...
```
DP:

For each batch:
1. GPU 0 reads the batch of data and then sends a mini-batch to each GPU.
2. The up-to-date model is replicated from GPU 0 to each GPU.
3. forward is executed, and output from each GPU is sent to GPU 0 to compute the loss.
4. The loss is distributed from GPU 0 to all GPUs, and backward ...
To disable the NVLink feature on one of the benchmarks, we use NCCL_P2P_DISABLE=1.

Here is the benchmarking code and outputs:

DP

```bash
rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 \
python examples/pytorch/language-modeling/run_clm.py \
--model_name_or_path openai-community/gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \
...
```
DDP w/ NVlink

```bash
rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 \
torchrun --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py \
--model_name_or_path openai-community/gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \
--do_train --output_dir /tmp/test-clm --per_device_train_batch_size ...
```
Here are the same benchmarking results gathered in a table for convenience:

| Type  | NVlink | Time |
| :---- | ------ | ---: |
| 2:DP  | Y      | 110s |
| 2:DDP | Y      | 101s |
| 2:DDP | N      | 131s |

As you can see, in this case DP is ~10% slower than DDP with NVlink, but ~15% faster than DDP without NVlink.
While it may appear complex, it is a very similar concept to DataParallel (DP). The difference is that instead of replicating the full model parameters, gradients and optimizer states, each GPU stores only a slice of it. Then, at run-time when the full layer parameters are needed just for the given layer, all GPUs synchronize to give each other the parts they miss.
While reading the literature on this topic you may encounter the following synonyms: Sharded, Partitioned. If you pay close attention to the way ZeRO partitions the model's weights, it looks very similar to tensor parallelism, which will be discussed later. This is because it partitions/shards each layer's weights, unlike ...
```
GPU0:
La | Lb | Lc
---|----|---
a0 | b0 | c0

GPU1:
La | Lb | Lc
---|----|---
a1 | b1 | c1

GPU2:
La | Lb | Lc
---|----|---
a2 | b2 | c2
```
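A toy single-process illustration of the sharding idea above (layer shapes are arbitrary; no real GPUs or communication involved): each rank keeps only its slice of every layer, and a full layer is rebuilt on demand by gathering all slices.

```python
import torch

full_weights = {"La": torch.randn(6, 4), "Lb": torch.randn(4, 4), "Lc": torch.randn(4, 2)}
world_size = 3

# each "GPU" stores one shard (a0/b0/c0, a1/b1/c1, a2/b2/c2) of every layer
shards = [
    {name: torch.chunk(w.flatten(), world_size)[rank] for name, w in full_weights.items()}
    for rank in range(world_size)
]

# at run time the full layer parameters are reconstructed just for the given layer
La = torch.cat([shards[r]["La"] for r in range(world_size)]).reshape(6, 4)
assert torch.equal(La, full_weights["La"])
```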
In a way, this is the same horizontal slicing as tensor parallelism, as opposed to Vertical slicing, where one puts whole layer-groups on different GPUs. Now let's see how this works:

Each of these GPUs will get the usual mini-batch as it works in DP:

```
x0 => GPU0
x1 => GPU1
x2 => GPU2
```

The inputs are passed without modifications ...
This mechanism is similar to an efficient group backpacking strategy: person A carries the tent, person B carries the stove, and person C carries the axe. Each night they all share what they have with others and get from others what they don't have, and in the morning they pack up their allocated type of gear and continue ...
In this example, when data moves from layer 0 to 3, it's no different from a regular forward pass. However, passing data from layer 3 to 4 requires moving it from GPU0 to GPU1, introducing a communication overhead. If the participating GPUs are on the same compute node (e.g. same physical machine) this copying is fast ...
From Naive Model Parallelism to Pipeline Parallelism

To explain Pipeline parallelism, we'll first look into Naive Model Parallelism (MP), also known as Vertical MP. This approach involves distributing groups of model layers across multiple GPUs by assigning specific layers to specific GPUs with .to(). As data flows ...
```
=================
| Layer |      |
|   0   |      |
|   1   | GPU0 |
|   2   |      |
|   3   |      |
=================
| Layer |      |
|   4   |      |
|   5   | GPU1 |
|   6   |      |
|   7   |      |
=================
```
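A minimal sketch of naive MP in PyTorch, assuming two GPUs and a toy two-stage model:

```python
import torch
from torch import nn

class TwoStageModel(nn.Module):
    def __init__(self):
        super().__init__()
        # assign specific layer groups to specific GPUs with .to()
        self.stage0 = nn.Sequential(nn.Linear(16, 16), nn.ReLU()).to("cuda:0")
        self.stage1 = nn.Linear(16, 4).to("cuda:1")

    def forward(self, x):
        x = self.stage0(x.to("cuda:0"))
        # crossing the stage boundary forces a GPU0 -> GPU1 copy (communication overhead)
        return self.stage1(x.to("cuda:1"))
```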
If we split the weight matrix A column-wise across N GPUs and perform matrix multiplications XA_1 through XA_n in parallel, then we will end up with N output vectors Y_1, Y_2, ..., Y_n which can be fed into GeLU independently:
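A toy numeric check of this property on a single device (shapes are arbitrary; no multi-GPU communication involved):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
X = torch.randn(4, 8)  # a batch of inputs
A = torch.randn(8, 6)  # the weight matrix of an MLP layer
A_shards = torch.chunk(A, 2, dim=1)  # column-wise split: "one shard per GPU"
Y_shards = [F.gelu(X @ A_i) for A_i in A_shards]  # each shard computed independently
Y = torch.cat(Y_shards, dim=1)
assert torch.allclose(Y, F.gelu(X @ A), atol=1e-6)
```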
Using this principle, we can update a multi-layer perceptron of arbitrary depth, without the need for any synchronization between GPUs until the very end, where we need to reconstruct the output vector from shards. The Megatron-LM paper authors provide a helpful illustration for that.

Parallelizing the multi-headed ...
At the bottom of the diagram, you can observe that the Pipeline Parallelism (PP) approach minimizes the number of idle GPU zones, referred to as 'bubbles'. Both parts of the diagram show a parallelism level of degree 4, meaning that 4 GPUs are involved in the pipeline. You can see that there's a forward path of 4 p...
Special considerations: TP requires a very fast network, and therefore it's not advisable to do TP across more than one node. Practically, if a node has 4 GPUs, the highest TP degree is therefore 4. If you need a TP degree of 8, you need to use nodes that have at least 8 GPUs.

This section is based on the original much ...
Here the bubble (idle time) is further minimized by prioritizing backward passes. Varuna further attempts to improve the schedule by using simulations to discover the most efficient scheduling.

OSLO has a pipeline parallelism implementation based on Transformers without nn.Sequential conversion.

Tensor Parallelism ...