# Efficient training on CPU

This guide focuses on how to train large models efficiently on CPU.

## Mixed precision with IPEX

IPEX is optimized for CPUs with AVX-512 or above, and it also works on CPUs with only AVX2. It is therefore expected to bring a performance benefit on Intel CPUs with AVX-512 or above, while CPUs with only AVX2 (e.g., AMD CPUs or older Intel CPUs) may run faster under IPEX, but this is not guaranteed. IPEX provides performance optimizations for CPU training with both Float32 and BFloat16. The use of BFloat16 is the main topic of the following sections.

The low-precision data type BFloat16 has been natively supported on 3rd Generation Intel® Xeon® Scalable Processors (aka Cooper Lake) with AVX-512, and will be supported on the next generation of Intel® Xeon® Scalable Processors with the Intel® Advanced Matrix Extensions (Intel® AMX) instruction set, with further boosted performance. Auto Mixed Precision for the CPU backend has been enabled since PyTorch 1.10. At the same time, support for Auto Mixed Precision with BFloat16 on CPU, together with BFloat16 optimization of operators, has been extensively enabled in Intel® Extension for PyTorch and partially upstreamed to the PyTorch master branch. Users can get better performance and user experience with IPEX Auto Mixed Precision.

See more detailed information on [Auto Mixed Precision](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/features/amp.html).

### IPEX installation

IPEX releases follow PyTorch; install it via pip:

| PyTorch Version | IPEX version |
| :---: | :---: |
| 1.13 | 1.13.0+cpu |
| 1.12 | 1.12.300+cpu |
| 1.11 | 1.11.200+cpu |
| 1.10 | 1.10.100+cpu |

```bash
pip install intel_extension_for_pytorch==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu
```

See other approaches in the [IPEX installation guide](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/installation.html).

### Usage in Trainer

To enable auto mixed precision with IPEX in the Trainer, users should add `use_ipex`, `bf16` and `no_cuda` to the training command arguments.

Take the [Transformers question-answering](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) use case as an example.

- Training with IPEX using BF16 auto mixed precision on CPU:

```bash
python run_qa.py \
  --model_name_or_path google-bert/bert-base-uncased \
  --dataset_name squad \
  --do_train \
  --do_eval \
  --per_device_train_batch_size 12 \
  --learning_rate 3e-5 \
  --num_train_epochs 2 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir /tmp/debug_squad/ \
  --use_ipex \
  --bf16 --no_cuda
```

### Practical examples

Blog: [Accelerating PyTorch Transformers with Intel Sapphire Rapids](https://huggingface.co/blog/intel-sapphire-rapids)
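As background on the data type itself: BFloat16 is essentially a Float32 with the mantissa cut down to 7 bits, which is why it keeps Float32's full exponent range while halving storage. A minimal stdlib-only sketch (the helper name `to_bfloat16` is ours, for illustration):

```python
import struct

def to_bfloat16(x: float) -> float:
    """Round a Float32 value to BFloat16 by keeping only the top 16 bits
    (1 sign + 8 exponent + 7 mantissa bits), round-to-nearest-even."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    # Add half of the dropped range, biased by the LSB of the kept part
    # (ties-to-even), then clear the low 16 bits.
    rounding = 0x7FFF + ((bits >> 16) & 1)
    bits = (bits + rounding) & 0xFFFF0000
    return struct.unpack("<f", struct.pack("<I", bits))[0]

print(to_bfloat16(3.141592653589793))  # → 3.140625 (pi with a 7-bit mantissa)
print(to_bfloat16(0.1))                # → 0.10009765625
```

The coarse mantissa is the trade-off mixed-precision training accepts in exchange for faster CPU math, while the preserved exponent range avoids the overflow/underflow issues that Float16 has.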
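The stock PyTorch side of this (CPU Auto Mixed Precision, available since PyTorch 1.10) can be observed even without IPEX: eligible ops inside a `torch.cpu.amp.autocast` region run in BFloat16 while parameters stay in Float32. A minimal sketch, assuming `torch` is installed; the tiny linear model is a toy stand-in:

```python
import torch

model = torch.nn.Linear(16, 4)  # toy stand-in model
x = torch.randn(8, 16)

# Inside the autocast region, eligible CPU ops (e.g. linear/matmul)
# execute in BFloat16; the module's weights remain Float32.
with torch.cpu.amp.autocast(dtype=torch.bfloat16):
    out = model(x)

print(out.dtype)           # torch.bfloat16
print(model.weight.dtype)  # torch.float32
```

IPEX builds on this mechanism with additional operator and optimizer optimizations; the `use_ipex` flag in the Trainer wires both together for you.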