Dance Diffusion
Dance Diffusion, by Zach Evans, is the first in a suite of generative audio tools for producers and musicians released by Harmonai.
Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines.
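One way to explore that speed/quality tradeoff with this pipeline is to vary num_inference_steps. A minimal sketch, assuming the harmonai/maestro-150k checkpoint used in the example further down:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("harmonai/maestro-150k").to("cuda")

# Fewer denoising steps: faster inference, usually a noisier sample.
generator = torch.Generator(device="cuda").manual_seed(0)
fast = pipe(num_inference_steps=25, audio_length_in_s=4.0, generator=generator).audios

# More denoising steps: slower inference, usually a cleaner sample.
generator = torch.Generator(device="cuda").manual_seed(0)  # same seed for a fair comparison
slow = pipe(num_inference_steps=100, audio_length_in_s=4.0, generator=generator).audios
```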
DanceDiffusionPipeline
Pipeline for audio generation.
This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.).

Parameters:
- unet (UNet1DModel) -- A UNet1DModel to denoise the encoded audio.
- scheduler (SchedulerMixin) -- A scheduler to be used in combination with unet to denoise the encoded audio latents. Can be one of IPNDMScheduler.
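These two components can also be loaded individually and wired together by hand. A minimal sketch, assuming the standard diffusers checkpoint layout with unet and scheduler subfolders:

```python
from diffusers import DanceDiffusionPipeline, IPNDMScheduler, UNet1DModel

model_id = "harmonai/maestro-150k"

# The subfolder names are an assumption based on the standard
# diffusers checkpoint layout.
unet = UNet1DModel.from_pretrained(model_id, subfolder="unet")
scheduler = IPNDMScheduler.from_pretrained(model_id, subfolder="scheduler")

# Wire the components into the pipeline by hand.
pipe = DanceDiffusionPipeline(unet=unet, scheduler=scheduler).to("cuda")
```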
__call__

__call__(batch_size: int = 1, num_inference_steps: int = 100, generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, audio_length_in_s: Optional[float] = None, return_dict: bool = True)

Source: https://github.com/huggingface/diffusers/blob/vr_11739/src/diffusers/pipelines/dance_diffusion/pipeline_dance_diffusion.py#L59

The call function to the pipeline for generation.

Parameters:
- batch_size (int, optional, defaults to 1) -- The number of audio samples to generate.
- num_inference_steps (int, optional, defaults to 100) -- The number of denoising steps. More denoising steps usually lead to a higher-quality audio sample at the expense of slower inference.
- generator (torch.Generator or List[torch.Generator], optional) -- A torch.Generator to make generation deterministic.
- audio_length_in_s (float, optional, defaults to self.unet.config.sample_size / self.unet.config.sample_rate) -- The length of the generated audio sample in seconds.
- return_dict (bool, optional, defaults to True) -- Whether or not to return an AudioPipelineOutput instead of a plain tuple.

Returns:
AudioPipelineOutput or tuple -- If return_dict is True, an AudioPipelineOutput is returned, otherwise a tuple is returned where the first element is a list with the generated audio.
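A short sketch of the less obvious arguments: generator for deterministic sampling, and return_dict=False to get a plain tuple (checkpoint name borrowed from the example below):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("harmonai/maestro-150k").to("cuda")

# A seeded generator makes sampling reproducible across runs.
generator = torch.Generator(device="cuda").manual_seed(42)

# With return_dict=False the pipeline returns a plain tuple whose first
# element holds the generated audio instead of an AudioPipelineOutput.
(audios,) = pipe(batch_size=2, generator=generator, audio_length_in_s=4.0, return_dict=False)
print(audios.shape)  # (batch_size, num_channels, num_samples)
```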
Example:
```python
from diffusers import DiffusionPipeline
from scipy.io.wavfile import write

model_id = "harmonai/maestro-150k"
pipe = DiffusionPipeline.from_pretrained(model_id)
pipe = pipe.to("cuda")

audios = pipe(audio_length_in_s=4.0).audios

# To save locally
for i, audio in enumerate(audios):
    write(f"maestro_test_{i}.wav", pipe.unet.config.sample_rate, audio.transpose())

# To display in Google Colab
import IPython.display as ipd

for audio in audios:
    display(ipd.Audio(audio, rate=pipe.unet.config.sample_rate))
```
AudioPipelineOutput
Output class for audio pipelines.
Parameters:
- audios (np.ndarray) -- Denoised audio samples as a NumPy array of shape (batch_size, num_channels, num_samples).
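Continuing from the pipeline call in the example above (a sketch; audios and pipe are assumed to already exist), the shape can be unpacked like this:

```python
# audios has shape (batch_size, num_channels, num_samples).
batch_size, num_channels, num_samples = audios.shape

# Clip duration in seconds follows from the model's sample rate.
duration_s = num_samples / pipe.unet.config.sample_rate
print(f"{batch_size} clip(s), {num_channels} channel(s), {duration_s:.2f}s each")
```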