Dance Diffusion

Dance Diffusion is by Zach Evans.

Dance Diffusion is the first in a suite of generative audio tools for producers and musicians released by Harmonai.

Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines.

DanceDiffusionPipeline[[diffusers.DanceDiffusionPipeline]]

class diffusers.DanceDiffusionPipeline(unet: UNet1DModel, scheduler: SchedulerMixin)
(source: https://github.com/huggingface/diffusers/blob/vr_12595/src/diffusers/pipelines/dance_diffusion/pipeline_dance_diffusion.py#L37)

Parameters:

  • unet (UNet1DModel) -- A UNet1DModel to denoise the encoded audio.
  • scheduler (SchedulerMixin) -- A scheduler to be used in combination with unet to denoise the encoded audio latents. Can be one of IPNDMScheduler.

Pipeline for audio generation.

This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.).

__call__(batch_size: int = 1, num_inference_steps: int = 100, generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, audio_length_in_s: Optional[float] = None, return_dict: bool = True)
(source: https://github.com/huggingface/diffusers/blob/vr_12595/src/diffusers/pipelines/dance_diffusion/pipeline_dance_diffusion.py#L59)

Parameters:

  • batch_size (int, optional, defaults to 1) -- The number of audio samples to generate.
  • num_inference_steps (int, optional, defaults to 100) -- The number of denoising steps. More denoising steps usually lead to a higher-quality audio sample at the expense of slower inference.
  • generator (torch.Generator, optional) -- A torch.Generator to make generation deterministic.
  • audio_length_in_s (float, optional, defaults to self.unet.config.sample_size / self.unet.config.sample_rate) -- The length of the generated audio sample in seconds.
  • return_dict (bool, optional, defaults to True) -- Whether or not to return an AudioPipelineOutput instead of a plain tuple.

Returns: AudioPipelineOutput or tuple -- If return_dict is True, an AudioPipelineOutput is returned; otherwise a tuple is returned where the first element is a list with the generated audio.

The call function to the pipeline for generation.
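An arbitrary audio_length_in_s is accepted because the pipeline rounds the implied sample count up to a length the 1D UNet can downsample cleanly at every level. The following is a rough, self-contained sketch of that padding logic; the function name and the num_up_blocks parameter are illustrative (the real pipeline reads len(unet.up_blocks) and the model's configured sample rate):

```python
import math


def round_up_sample_size(audio_length_in_s: float, sample_rate: int, num_up_blocks: int) -> int:
    """Round the requested length up to a sample count the 1D UNet can process.

    Each down block halves the temporal resolution, so the sample count
    must be a multiple of 2 ** num_up_blocks.
    """
    down_scale_factor = 2 ** num_up_blocks
    sample_size = audio_length_in_s * sample_rate
    if sample_size < 3 * down_scale_factor:
        raise ValueError(f"audio_length_in_s of {audio_length_in_s} is too small for this model")
    # Round up to the next multiple of the total downsampling factor.
    return int(math.ceil(sample_size / down_scale_factor)) * down_scale_factor


# 4 seconds at 16 kHz with 4 up blocks: 64000 is already a multiple of 16.
print(round_up_sample_size(4.0, 16000, 4))  # 64000
```

Requests that do not land on a clean multiple are padded up, so the returned audio can be slightly longer than asked for.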

Example:

from diffusers import DiffusionPipeline
from scipy.io.wavfile import write

model_id = "harmonai/maestro-150k"
pipe = DiffusionPipeline.from_pretrained(model_id)
pipe = pipe.to("cuda")

audios = pipe(audio_length_in_s=4.0).audios

# To save locally
for i, audio in enumerate(audios):
    write(f"maestro_test_{i}.wav", pipe.unet.config.sample_rate, audio.transpose())

# To display in google colab
import IPython.display as ipd

for audio in audios:
    ipd.display(ipd.Audio(audio, rate=pipe.unet.config.sample_rate))
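If SciPy is unavailable, float audio in [-1, 1] can also be written with the standard-library wave module. This is a minimal sketch for a single mono channel; the function name is illustrative and the clipping/scaling convention is an assumption:

```python
import struct
import wave


def save_wav_int16(path: str, samples, sample_rate: int) -> None:
    """Write mono float samples in [-1, 1] as 16-bit PCM WAV (stdlib only)."""
    clipped = (max(-1.0, min(1.0, float(s))) for s in samples)
    frames = b"".join(struct.pack("<h", int(s * 32767)) for s in clipped)
    with wave.open(path, "wb") as f:
        f.setnchannels(1)   # mono
        f.setsampwidth(2)   # 16-bit samples
        f.setframerate(sample_rate)
        f.writeframes(frames)


save_wav_int16("demo.wav", [0.0, 0.5, -0.5, 1.0], 16000)
```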

AudioPipelineOutput[[diffusers.AudioPipelineOutput]]

class diffusers.AudioPipelineOutput(audios: np.ndarray)
(source: https://github.com/huggingface/diffusers/blob/vr_12595/src/diffusers/pipelines/pipeline_utils.py#L132)

Parameters:

  • audios (np.ndarray) -- List of denoised audio samples as a NumPy array of shape (batch_size, num_channels, sample_rate).

Output class for audio pipelines.
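To make the return_dict switch concrete, here is a minimal stand-in mirroring the documented audios field (the real class lives in diffusers; the class name and nested-list payload here are illustrative only):

```python
from dataclasses import dataclass
from typing import Any


@dataclass
class AudioOutputSketch:
    # In diffusers, `audios` is an np.ndarray of shape
    # (batch_size, num_channels, samples); a nested list stands in here.
    audios: Any


batch = [[[0.0, 0.1, -0.1]]]
out = AudioOutputSketch(audios=batch)  # return_dict=True style: attribute access
as_tuple = (batch,)                    # return_dict=False style: first tuple element
assert out.audios is as_tuple[0]
```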
