<meta charset="utf-8" /><meta name="hf:doc:metadata" content="{&quot;title&quot;:&quot;8-bit optimizers&quot;,&quot;local&quot;:&quot;8-bit-optimizers&quot;,&quot;sections&quot;:[{&quot;title&quot;:&quot;Stable embedding layer&quot;,&quot;local&quot;:&quot;stable-embedding-layer&quot;,&quot;sections&quot;:[],&quot;depth&quot;:2},{&quot;title&quot;:&quot;Paged optimizers&quot;,&quot;local&quot;:&quot;paged-optimizers&quot;,&quot;sections&quot;:[],&quot;depth&quot;:2}],&quot;depth&quot;:1}">
<p></p> <h1 class="relative group"><a id="8-bit-optimizers" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#8-bit-optimizers"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>8-bit optimizers</span></h1> <p data-svelte-h="svelte-xvqqf6">Stateful optimizers maintain gradient statistics over time, for example, the exponentially smoothed sum (SGD with momentum) or squared sum (Adam) of past gradient values. 
This state can be used to accelerate optimization compared to plain stochastic gradient descent, but it uses memory that might otherwise be allocated to model parameters, which limits the maximum size of models that can be trained in practice. The figure below shows the biggest models that can be trained with 8-bit optimizers.</p> <div class="flex justify-center" data-svelte-h="svelte-133ai6n"><figure><img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bitsandbytes/optimizer_largest_model.png"> <figcaption class="text-center">Depending on your GPU size, you can train a much larger model with an 8-bit optimizer.</figcaption></figure></div> <p data-svelte-h="svelte-dtecpb">bitsandbytes optimizers use 8-bit statistics while maintaining the performance of 32-bit optimizer states.</p> <p data-svelte-h="svelte-t1yq9c">To overcome the resulting computational, quantization, and stability challenges, 8-bit optimizers have three components:</p> <ol data-svelte-h="svelte-ixr27h"><li>Block-wise quantization: divides input tensors into smaller blocks that are independently quantized, isolating outliers and distributing the error more equally over all bits. Each block is processed in parallel across cores, yielding faster optimization and high precision quantization.</li> <li>Dynamic quantization: quantizes both small and large values with high precision.</li> <li>Stable embedding layer: improves stability during optimization for models with word embeddings.</li></ol> <p data-svelte-h="svelte-1yk0ad">With these components, performing an optimizer update with 8-bit states is straightforward.
The 8-bit optimizer states are dequantized to 32-bit before you perform the update, and then the states are quantized back to 8-bit for storage.</p> <p data-svelte-h="svelte-i24tp8">The 8-bit to 32-bit conversion happens element-by-element in registers, meaning no slow copies to GPU memory or additional temporary memory are needed to perform quantization and dequantization. For GPUs, this makes 8-bit optimizers much faster than regular 32-bit optimizers.</p> <div class="flex justify-center" data-svelte-h="svelte-1ffqqa1"><figure><img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bitsandbytes/optimizer_comparison.png"> <figcaption class="text-center">A comparison of memory and time saved using 8-bit and 32-bit optimizers.</figcaption></figure></div> <h2 class="relative group"><a id="stable-embedding-layer" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#stable-embedding-layer"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Stable embedding layer</span></h2> <p data-svelte-h="svelte-1cdjq6f">The stable embedding layer improves the training stability of the standard word 
embedding layer for NLP tasks. It addresses the challenge of non-uniform input distributions and mitigates extreme gradient variations. This means the stable embedding layer can support more aggressive quantization strategies without compromising training stability, and it can help achieve stable training outcomes, which is particularly important for models dealing with diverse and complex language data.</p> <p data-svelte-h="svelte-1xbcj8o">There are three features of the stable embedding layer:</p> <ul data-svelte-h="svelte-17lmhka"><li>Initialization: utilizes Xavier uniform initialization to maintain consistent variance, reducing the likelihood of large gradients.</li> <li>Normalization: incorporates layer normalization before adding positional embeddings, aiding in output stability.</li> <li>Optimizer states: employs 32-bit optimizer states exclusively for this layer to enhance stability, while the rest of the model may use standard 16-bit precision.</li></ul> <h2 class="relative group"><a id="paged-optimizers" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#paged-optimizers"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> 
<span>Paged optimizers</span></h2> <p data-svelte-h="svelte-phb6f5">Paged optimizers are built on top of the <a href="https://developer.nvidia.com/blog/unified-memory-cuda-beginners/" rel="nofollow">unified memory</a> feature of CUDA. Unified memory provides a single memory space the GPU and CPU can easily access. While this feature is not supported by PyTorch, it has been added to bitsandbytes.</p> <p data-svelte-h="svelte-71whgn">Paged optimizers work like regular CPU paging, which means they <em>only become active if you run out of GPU memory</em>. When that happens, memory is transferred page-by-page from the GPU to the CPU. The memory is mapped, meaning pages are pre-allocated on the CPU, but they are not updated automatically. Pages are only updated if the memory is accessed or a swapping operation is launched.</p> <p data-svelte-h="svelte-1b4jzri">The unified memory feature is less efficient than regular asynchronous memory transfers, and you usually won’t be able to achieve full PCIe memory bandwidth utilization. With a manual prefetch, transfer speeds can be high, but still only about half of the full PCIe memory bandwidth or worse (tested on 16x lanes PCIe 3.0).</p> <p data-svelte-h="svelte-1h5ow31">This means performance depends highly on the particular use case. For example, if you evict 1 GB of memory per forward-backward-optimizer loop, you can expect transfers at about 50% of the PCIe bandwidth in the best case. PCIe 3.0 with 16x lanes runs at 16 GB/s, so evicting 1 GB adds <code>1/(16*0.5) = 1/8 s = 125 ms</code> of overhead per optimizer step. Other overhead can be estimated for a particular use case given the PCIe generation, the number of lanes, and the amount of memory evicted in each iteration.</p> <p data-svelte-h="svelte-1ttez5b">Compared to CPU offloading, a paged optimizer has zero overhead if all the memory fits on the device, and only some overhead if some of the memory needs to be evicted.
For offloading, you usually offload fixed parts of the model and need to offload and onload all of this memory with each iteration through the model (sometimes twice, for both the forward and backward pass).</p>
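<p>To make the quantize-dequantize cycle concrete, here is a minimal NumPy sketch of block-wise absmax quantization of an optimizer state, followed by the dequantize-to-32-bit, update, and requantize-to-8-bit steps described earlier. This is an illustrative toy, not the bitsandbytes implementation: the tiny block size, simple linear int8 codes (bitsandbytes uses a dynamic data type), and the stand-in update are assumptions made for clarity.</p>

```python
import numpy as np

BLOCK = 4  # toy block size; real implementations use much larger blocks

def quantize_blockwise(x, block=BLOCK):
    """Quantize a 1-D float32 array to int8 with one absmax scale per block."""
    x = x.reshape(-1, block)
    scales = np.abs(x).max(axis=1, keepdims=True)  # per-block absmax isolates outliers
    scales[scales == 0] = 1.0                      # avoid division by zero
    q = np.round(x / scales * 127).astype(np.int8)
    return q, scales

def dequantize_blockwise(q, scales):
    """Recover float32 values from int8 codes and per-block scales."""
    return (q.astype(np.float32) / 127 * scales).reshape(-1)

# Optimizer state with small values in one block and large outliers in another.
state = np.array([0.01, -0.02, 0.015, 0.005, 5.0, -4.0, 3.0, 2.0], dtype=np.float32)
q, scales = quantize_blockwise(state)

# Dequantize to 32-bit, apply a (toy) update, then quantize back for storage.
state32 = dequantize_blockwise(q, scales)
state32 += 0.001  # stand-in for the real optimizer update
q, scales = quantize_blockwise(state32)

# Because the small values get their own block scale, they are not crushed
# by the large values in the other block.
err = np.abs(dequantize_blockwise(q, scales) - (state + 0.001)).max()
print(err)
```

<p>With a single scale for the whole tensor, the outliers near 5.0 would force a quantization step of about 0.04, wiping out the block of values near 0.01; per-block scales keep the error of each block proportional to its own magnitude.</p>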
