Dataset columns:
    id          int64     values 0 – 190k
    prompt      string    lengths 21 – 13.4M characters
    docstring   string    lengths 1 – 12k characters
2,660
import cv2
import numpy as np
import torch
from torch.nn import functional as F

The provided code snippet includes necessary dependencies for implementing the `usm_sharp` function.
Write a Python function `def usm_sharp(img, weight=0.5, radius=50, threshold=10)` to solve the following problem:
    USM sharpening. Input image: I; blurry image: B.
    1. sharp = I + weight * (I - B)
    2. Mask = 1 if abs(I - B) > threshold, else 0
    3. Blur the mask to obtain a soft mask.
    4. Out = Mask * sharp + (1 - Mask) * I
    Args:
        img (Numpy array): Input image, HWC, BGR; float32, [0, 1].
        weight (float): Sharp weight. Default: 0.5.
        radius (float): Kernel size of Gaussian blur. Default: 50.
        threshold (int): Mask threshold on the [0, 255] scale. Default: 10.
Here is the function:

def usm_sharp(img, weight=0.5, radius=50, threshold=10):
    """USM sharpening.

    Input image: I; blurry image: B.
    1. sharp = I + weight * (I - B)
    2. Mask = 1 if abs(I - B) > threshold, else 0
    3. Blur the mask to obtain a soft mask.
    4. Out = Mask * sharp + (1 - Mask) * I

    Args:
        img (Numpy array): Input image, HWC, BGR; float32, [0, 1].
        weight (float): Sharp weight. Default: 0.5.
        radius (float): Kernel size of Gaussian blur. Default: 50.
        threshold (int): Mask threshold on the [0, 255] scale. Default: 10.
    """
    if radius % 2 == 0:
        radius += 1
    blur = cv2.GaussianBlur(img, (radius, radius), 0)
    residual = img - blur
    mask = np.abs(residual) * 255 > threshold
    mask = mask.astype('float32')
    soft_mask = cv2.GaussianBlur(mask, (radius, radius), 0)

    sharp = img + weight * residual
    sharp = np.clip(sharp, 0, 1)
    return soft_mask * sharp + (1 - soft_mask) * img
USM sharpening. Input image: I; blurry image: B. 1. sharp = I + weight * (I - B) 2. Mask = 1 if abs(I - B) > threshold, else 0 3. Blur the mask to obtain a soft mask. 4. Out = Mask * sharp + (1 - Mask) * I Args: img (Numpy array): Input image, HWC, BGR; float32, [0, 1]. weight (float): Sharp weight. Default: 0.5. radius (float): Kernel size of Gaussian blur. Default: 50. threshold (int): Mask threshold on the [0, 255] scale. Default: 10.
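A minimal usage sketch for this row's `usm_sharp`; the file names are placeholders, and the only requirement from the code above is an HWC BGR float32 image in [0, 1]:

import cv2
import numpy as np

img = cv2.imread('input.png').astype(np.float32) / 255.    # HWC, BGR, float32, [0, 1]
out = usm_sharp(img, weight=0.5, radius=50, threshold=10)
cv2.imwrite('sharpened.png', (out * 255.).round().astype(np.uint8))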
2,661
import itertools
import numpy as np
import torch
import torch.nn as nn
from torch.nn import functional as F

The provided code snippet includes necessary dependencies for implementing the `diff_round` function.
Write a Python function `def diff_round(x)` to solve the following problem:
    Differentiable rounding function
Here is the function:

def diff_round(x):
    """ Differentiable rounding function """
    return torch.round(x) + (x - torch.round(x))**3
Differentiable rounding function
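A quick check (values chosen arbitrarily) of why the cubic residual makes rounding usable in backpropagation: `torch.round` alone has zero gradient, so only the `(x - round(x))**3` term contributes.

import torch

x = torch.tensor([0.3, 1.8], requires_grad=True)
y = diff_round(x)
print(y)        # tensor([0.0270, 1.9920], ...) -- close to round(x)
y.sum().backward()
print(x.grad)   # tensor([0.2700, 0.1200]) == 3 * (x - round(x))**2, non-zero almost everywhere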
2,662
import itertools
import numpy as np
import torch
import torch.nn as nn
from torch.nn import functional as F

The provided code snippet includes necessary dependencies for implementing the `quality_to_factor` function.
Write a Python function `def quality_to_factor(quality)` to solve the following problem:
    Calculate factor corresponding to quality
    Args:
        quality(float): Quality for jpeg compression.
    Returns:
        float: Compression factor.
Here is the function:

def quality_to_factor(quality):
    """ Calculate factor corresponding to quality

    Args:
        quality(float): Quality for jpeg compression.

    Returns:
        float: Compression factor.
    """
    if quality < 50:
        quality = 5000. / quality
    else:
        quality = 200. - quality * 2
    return quality / 100.
Calculate factor corresponding to quality Args: quality(float): Quality for jpeg compression. Returns: float: Compression factor.
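A worked example of the piecewise mapping above: quality below 50 maps to 5000/quality, otherwise to 200 - 2*quality, and the result is divided by 100.

for q in (10, 50, 95):
    print(q, quality_to_factor(q))
# 10 -> 5.0   (low quality, large compression factor)
# 50 -> 1.0
# 95 -> 0.1   (high quality, small compression factor)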
2,663
import numpy as np import torch def _convert_input_type_range(img): """Convert the type and range of the input image. It converts the input image to np.float32 type and range of [0, 1]. It is mainly used for pre-processing the input image in colorspace conversion functions such as rgb2ycbcr and ycbcr2rgb. Args: img (ndarray): The input image. It accepts: 1. np.uint8 type with range [0, 255]; 2. np.float32 type with range [0, 1]. Returns: (ndarray): The converted image with type of np.float32 and range of [0, 1]. """ img_type = img.dtype img = img.astype(np.float32) if img_type == np.float32: pass elif img_type == np.uint8: img /= 255. else: raise TypeError(f'The img type should be np.float32 or np.uint8, but got {img_type}') return img def _convert_output_type_range(img, dst_type): """Convert the type and range of the image according to dst_type. It converts the image to desired type and range. If `dst_type` is np.uint8, images will be converted to np.uint8 type with range [0, 255]. If `dst_type` is np.float32, it converts the image to np.float32 type with range [0, 1]. It is mainly used for post-processing images in colorspace conversion functions such as rgb2ycbcr and ycbcr2rgb. Args: img (ndarray): The image to be converted with np.float32 type and range [0, 255]. dst_type (np.uint8 | np.float32): If dst_type is np.uint8, it converts the image to np.uint8 type with range [0, 255]. If dst_type is np.float32, it converts the image to np.float32 type with range [0, 1]. Returns: (ndarray): The converted image with desired type and range. """ if dst_type not in (np.uint8, np.float32): raise TypeError(f'The dst_type should be np.float32 or np.uint8, but got {dst_type}') if dst_type == np.uint8: img = img.round() else: img /= 255. return img.astype(dst_type) The provided code snippet includes necessary dependencies for implementing the `rgb2ycbcr` function. Write a Python function `def rgb2ycbcr(img, y_only=False)` to solve the following problem: Convert a RGB image to YCbCr image. This function produces the same results as Matlab's `rgb2ycbcr` function. It implements the ITU-R BT.601 conversion for standard-definition television. See more details in https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion. It differs from a similar function in cv2.cvtColor: `RGB <-> YCrCb`. In OpenCV, it implements a JPEG conversion. See more details in https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion. Args: img (ndarray): The input image. It accepts: 1. np.uint8 type with range [0, 255]; 2. np.float32 type with range [0, 1]. y_only (bool): Whether to only return Y channel. Default: False. Returns: ndarray: The converted YCbCr image. The output image has the same type and range as input image. Here is the function: def rgb2ycbcr(img, y_only=False): """Convert a RGB image to YCbCr image. This function produces the same results as Matlab's `rgb2ycbcr` function. It implements the ITU-R BT.601 conversion for standard-definition television. See more details in https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion. It differs from a similar function in cv2.cvtColor: `RGB <-> YCrCb`. In OpenCV, it implements a JPEG conversion. See more details in https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion. Args: img (ndarray): The input image. It accepts: 1. np.uint8 type with range [0, 255]; 2. np.float32 type with range [0, 1]. y_only (bool): Whether to only return Y channel. Default: False. Returns: ndarray: The converted YCbCr image. The output image has the same type and range as input image. 
""" img_type = img.dtype img = _convert_input_type_range(img) if y_only: out_img = np.dot(img, [65.481, 128.553, 24.966]) + 16.0 else: out_img = np.matmul( img, [[65.481, -37.797, 112.0], [128.553, -74.203, -93.786], [24.966, 112.0, -18.214]]) + [16, 128, 128] out_img = _convert_output_type_range(out_img, img_type) return out_img
Convert a RGB image to YCbCr image. This function produces the same results as Matlab's `rgb2ycbcr` function. It implements the ITU-R BT.601 conversion for standard-definition television. See more details in https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion. It differs from a similar function in cv2.cvtColor: `RGB <-> YCrCb`. In OpenCV, it implements a JPEG conversion. See more details in https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion. Args: img (ndarray): The input image. It accepts: 1. np.uint8 type with range [0, 255]; 2. np.float32 type with range [0, 1]. y_only (bool): Whether to only return Y channel. Default: False. Returns: ndarray: The converted YCbCr image. The output image has the same type and range as input image.
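An illustrative call on synthetic data (shapes and values are arbitrary); as described above, the output dtype and range follow the input.

import numpy as np

rgb = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)    # toy RGB image
ycbcr = rgb2ycbcr(rgb)                                         # uint8 out; Y in [16, 235], Cb/Cr in [16, 240]
y = rgb2ycbcr(rgb.astype(np.float32) / 255., y_only=True)     # float32 (H, W) Y channel, roughly [16/255, 235/255]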
2,664
import numpy as np import torch def _convert_input_type_range(img): """Convert the type and range of the input image. It converts the input image to np.float32 type and range of [0, 1]. It is mainly used for pre-processing the input image in colorspace conversion functions such as rgb2ycbcr and ycbcr2rgb. Args: img (ndarray): The input image. It accepts: 1. np.uint8 type with range [0, 255]; 2. np.float32 type with range [0, 1]. Returns: (ndarray): The converted image with type of np.float32 and range of [0, 1]. """ img_type = img.dtype img = img.astype(np.float32) if img_type == np.float32: pass elif img_type == np.uint8: img /= 255. else: raise TypeError(f'The img type should be np.float32 or np.uint8, but got {img_type}') return img def _convert_output_type_range(img, dst_type): """Convert the type and range of the image according to dst_type. It converts the image to desired type and range. If `dst_type` is np.uint8, images will be converted to np.uint8 type with range [0, 255]. If `dst_type` is np.float32, it converts the image to np.float32 type with range [0, 1]. It is mainly used for post-processing images in colorspace conversion functions such as rgb2ycbcr and ycbcr2rgb. Args: img (ndarray): The image to be converted with np.float32 type and range [0, 255]. dst_type (np.uint8 | np.float32): If dst_type is np.uint8, it converts the image to np.uint8 type with range [0, 255]. If dst_type is np.float32, it converts the image to np.float32 type with range [0, 1]. Returns: (ndarray): The converted image with desired type and range. """ if dst_type not in (np.uint8, np.float32): raise TypeError(f'The dst_type should be np.float32 or np.uint8, but got {dst_type}') if dst_type == np.uint8: img = img.round() else: img /= 255. return img.astype(dst_type) The provided code snippet includes necessary dependencies for implementing the `bgr2ycbcr` function. Write a Python function `def bgr2ycbcr(img, y_only=False)` to solve the following problem: Convert a BGR image to YCbCr image. The bgr version of rgb2ycbcr. It implements the ITU-R BT.601 conversion for standard-definition television. See more details in https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion. It differs from a similar function in cv2.cvtColor: `BGR <-> YCrCb`. In OpenCV, it implements a JPEG conversion. See more details in https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion. Args: img (ndarray): The input image. It accepts: 1. np.uint8 type with range [0, 255]; 2. np.float32 type with range [0, 1]. y_only (bool): Whether to only return Y channel. Default: False. Returns: ndarray: The converted YCbCr image. The output image has the same type and range as input image. Here is the function: def bgr2ycbcr(img, y_only=False): """Convert a BGR image to YCbCr image. The bgr version of rgb2ycbcr. It implements the ITU-R BT.601 conversion for standard-definition television. See more details in https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion. It differs from a similar function in cv2.cvtColor: `BGR <-> YCrCb`. In OpenCV, it implements a JPEG conversion. See more details in https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion. Args: img (ndarray): The input image. It accepts: 1. np.uint8 type with range [0, 255]; 2. np.float32 type with range [0, 1]. y_only (bool): Whether to only return Y channel. Default: False. Returns: ndarray: The converted YCbCr image. The output image has the same type and range as input image. 
""" img_type = img.dtype img = _convert_input_type_range(img) if y_only: out_img = np.dot(img, [24.966, 128.553, 65.481]) + 16.0 else: out_img = np.matmul( img, [[24.966, 112.0, -18.214], [128.553, -74.203, -93.786], [65.481, -37.797, 112.0]]) + [16, 128, 128] out_img = _convert_output_type_range(out_img, img_type) return out_img
Convert a BGR image to YCbCr image. The bgr version of rgb2ycbcr. It implements the ITU-R BT.601 conversion for standard-definition television. See more details in https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion. It differs from a similar function in cv2.cvtColor: `BGR <-> YCrCb`. In OpenCV, it implements a JPEG conversion. See more details in https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion. Args: img (ndarray): The input image. It accepts: 1. np.uint8 type with range [0, 255]; 2. np.float32 type with range [0, 1]. y_only (bool): Whether to only return Y channel. Default: False. Returns: ndarray: The converted YCbCr image. The output image has the same type and range as input image.
2,665
import numpy as np import torch def _convert_input_type_range(img): """Convert the type and range of the input image. It converts the input image to np.float32 type and range of [0, 1]. It is mainly used for pre-processing the input image in colorspace conversion functions such as rgb2ycbcr and ycbcr2rgb. Args: img (ndarray): The input image. It accepts: 1. np.uint8 type with range [0, 255]; 2. np.float32 type with range [0, 1]. Returns: (ndarray): The converted image with type of np.float32 and range of [0, 1]. """ img_type = img.dtype img = img.astype(np.float32) if img_type == np.float32: pass elif img_type == np.uint8: img /= 255. else: raise TypeError(f'The img type should be np.float32 or np.uint8, but got {img_type}') return img def _convert_output_type_range(img, dst_type): """Convert the type and range of the image according to dst_type. It converts the image to desired type and range. If `dst_type` is np.uint8, images will be converted to np.uint8 type with range [0, 255]. If `dst_type` is np.float32, it converts the image to np.float32 type with range [0, 1]. It is mainly used for post-processing images in colorspace conversion functions such as rgb2ycbcr and ycbcr2rgb. Args: img (ndarray): The image to be converted with np.float32 type and range [0, 255]. dst_type (np.uint8 | np.float32): If dst_type is np.uint8, it converts the image to np.uint8 type with range [0, 255]. If dst_type is np.float32, it converts the image to np.float32 type with range [0, 1]. Returns: (ndarray): The converted image with desired type and range. """ if dst_type not in (np.uint8, np.float32): raise TypeError(f'The dst_type should be np.float32 or np.uint8, but got {dst_type}') if dst_type == np.uint8: img = img.round() else: img /= 255. return img.astype(dst_type) The provided code snippet includes necessary dependencies for implementing the `ycbcr2rgb` function. Write a Python function `def ycbcr2rgb(img)` to solve the following problem: Convert a YCbCr image to RGB image. This function produces the same results as Matlab's ycbcr2rgb function. It implements the ITU-R BT.601 conversion for standard-definition television. See more details in https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion. It differs from a similar function in cv2.cvtColor: `YCrCb <-> RGB`. In OpenCV, it implements a JPEG conversion. See more details in https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion. Args: img (ndarray): The input image. It accepts: 1. np.uint8 type with range [0, 255]; 2. np.float32 type with range [0, 1]. Returns: ndarray: The converted RGB image. The output image has the same type and range as input image. Here is the function: def ycbcr2rgb(img): """Convert a YCbCr image to RGB image. This function produces the same results as Matlab's ycbcr2rgb function. It implements the ITU-R BT.601 conversion for standard-definition television. See more details in https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion. It differs from a similar function in cv2.cvtColor: `YCrCb <-> RGB`. In OpenCV, it implements a JPEG conversion. See more details in https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion. Args: img (ndarray): The input image. It accepts: 1. np.uint8 type with range [0, 255]; 2. np.float32 type with range [0, 1]. Returns: ndarray: The converted RGB image. The output image has the same type and range as input image. 
""" img_type = img.dtype img = _convert_input_type_range(img) * 255 out_img = np.matmul(img, [[0.00456621, 0.00456621, 0.00456621], [0, -0.00153632, 0.00791071], [0.00625893, -0.00318811, 0]]) * 255.0 + [-222.921, 135.576, -276.836] # noqa: E126 out_img = _convert_output_type_range(out_img, img_type) return out_img
Convert a YCbCr image to RGB image. This function produces the same results as Matlab's ycbcr2rgb function. It implements the ITU-R BT.601 conversion for standard-definition television. See more details in https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion. It differs from a similar function in cv2.cvtColor: `YCrCb <-> RGB`. In OpenCV, it implements a JPEG conversion. See more details in https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion. Args: img (ndarray): The input image. It accepts: 1. np.uint8 type with range [0, 255]; 2. np.float32 type with range [0, 1]. Returns: ndarray: The converted RGB image. The output image has the same type and range as input image.
2,666
import numpy as np import torch def _convert_input_type_range(img): """Convert the type and range of the input image. It converts the input image to np.float32 type and range of [0, 1]. It is mainly used for pre-processing the input image in colorspace conversion functions such as rgb2ycbcr and ycbcr2rgb. Args: img (ndarray): The input image. It accepts: 1. np.uint8 type with range [0, 255]; 2. np.float32 type with range [0, 1]. Returns: (ndarray): The converted image with type of np.float32 and range of [0, 1]. """ img_type = img.dtype img = img.astype(np.float32) if img_type == np.float32: pass elif img_type == np.uint8: img /= 255. else: raise TypeError(f'The img type should be np.float32 or np.uint8, but got {img_type}') return img def _convert_output_type_range(img, dst_type): """Convert the type and range of the image according to dst_type. It converts the image to desired type and range. If `dst_type` is np.uint8, images will be converted to np.uint8 type with range [0, 255]. If `dst_type` is np.float32, it converts the image to np.float32 type with range [0, 1]. It is mainly used for post-processing images in colorspace conversion functions such as rgb2ycbcr and ycbcr2rgb. Args: img (ndarray): The image to be converted with np.float32 type and range [0, 255]. dst_type (np.uint8 | np.float32): If dst_type is np.uint8, it converts the image to np.uint8 type with range [0, 255]. If dst_type is np.float32, it converts the image to np.float32 type with range [0, 1]. Returns: (ndarray): The converted image with desired type and range. """ if dst_type not in (np.uint8, np.float32): raise TypeError(f'The dst_type should be np.float32 or np.uint8, but got {dst_type}') if dst_type == np.uint8: img = img.round() else: img /= 255. return img.astype(dst_type) The provided code snippet includes necessary dependencies for implementing the `ycbcr2bgr` function. Write a Python function `def ycbcr2bgr(img)` to solve the following problem: Convert a YCbCr image to BGR image. The bgr version of ycbcr2rgb. It implements the ITU-R BT.601 conversion for standard-definition television. See more details in https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion. It differs from a similar function in cv2.cvtColor: `YCrCb <-> BGR`. In OpenCV, it implements a JPEG conversion. See more details in https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion. Args: img (ndarray): The input image. It accepts: 1. np.uint8 type with range [0, 255]; 2. np.float32 type with range [0, 1]. Returns: ndarray: The converted BGR image. The output image has the same type and range as input image. Here is the function: def ycbcr2bgr(img): """Convert a YCbCr image to BGR image. The bgr version of ycbcr2rgb. It implements the ITU-R BT.601 conversion for standard-definition television. See more details in https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion. It differs from a similar function in cv2.cvtColor: `YCrCb <-> BGR`. In OpenCV, it implements a JPEG conversion. See more details in https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion. Args: img (ndarray): The input image. It accepts: 1. np.uint8 type with range [0, 255]; 2. np.float32 type with range [0, 1]. Returns: ndarray: The converted BGR image. The output image has the same type and range as input image. 
""" img_type = img.dtype img = _convert_input_type_range(img) * 255 out_img = np.matmul(img, [[0.00456621, 0.00456621, 0.00456621], [0.00791071, -0.00153632, 0], [0, -0.00318811, 0.00625893]]) * 255.0 + [-276.836, 135.576, -222.921] # noqa: E126 out_img = _convert_output_type_range(out_img, img_type) return out_img
Convert a YCbCr image to BGR image. The bgr version of ycbcr2rgb. It implements the ITU-R BT.601 conversion for standard-definition television. See more details in https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion. It differs from a similar function in cv2.cvtColor: `YCrCb <-> BGR`. In OpenCV, it implements a JPEG conversion. See more details in https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion. Args: img (ndarray): The input image. It accepts: 1. np.uint8 type with range [0, 255]; 2. np.float32 type with range [0, 1]. Returns: ndarray: The converted BGR image. The output image has the same type and range as input image.
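A consistency check combining this row's `ycbcr2bgr` with `bgr2ycbcr` from row 2,664 above (synthetic data; the forward and inverse matrices are truncated inverses of each other, so the round-trip error is small but not exactly zero):

import numpy as np

bgr = np.random.rand(8, 8, 3).astype(np.float32)   # float32 BGR in [0, 1]
restored = ycbcr2bgr(bgr2ycbcr(bgr))
print(np.abs(restored - bgr).max())                # expected to be tiny, on the order of 1e-4 or less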
2,667
import datetime
import logging
import time
from .dist_util import get_dist_info, master_only

def init_tb_logger(log_dir):
    from torch.utils.tensorboard import SummaryWriter
    tb_logger = SummaryWriter(log_dir=log_dir)
    return tb_logger
null
2,668
import datetime import logging import time from .dist_util import get_dist_info, master_only def get_root_logger(logger_name='basicsr', log_level=logging.INFO, log_file=None): """Get the root logger. The logger will be initialized if it has not been initialized. By default a StreamHandler will be added. If `log_file` is specified, a FileHandler will also be added. Args: logger_name (str): root logger name. Default: 'basicsr'. log_file (str | None): The log filename. If specified, a FileHandler will be added to the root logger. log_level (int): The root logger level. Note that only the process of rank 0 is affected, while other processes will set the level to "Error" and be silent most of the time. Returns: logging.Logger: The root logger. """ logger = logging.getLogger(logger_name) # if the logger has been initialized, just return it if logger_name in initialized_logger: return logger format_str = '%(asctime)s %(levelname)s: %(message)s' stream_handler = logging.StreamHandler() stream_handler.setFormatter(logging.Formatter(format_str)) logger.addHandler(stream_handler) logger.propagate = False rank, _ = get_dist_info() if rank != 0: logger.setLevel('ERROR') elif log_file is not None: logger.setLevel(log_level) # add file handler file_handler = logging.FileHandler(log_file, 'w') file_handler.setFormatter(logging.Formatter(format_str)) file_handler.setLevel(log_level) logger.addHandler(file_handler) initialized_logger[logger_name] = True return logger The provided code snippet includes necessary dependencies for implementing the `init_wandb_logger` function. Write a Python function `def init_wandb_logger(opt)` to solve the following problem: We now only use wandb to sync tensorboard log. Here is the function: def init_wandb_logger(opt): """We now only use wandb to sync tensorboard log.""" import wandb logger = get_root_logger() project = opt['logger']['wandb']['project'] resume_id = opt['logger']['wandb'].get('resume_id') if resume_id: wandb_id = resume_id resume = 'allow' logger.warning(f'Resume wandb logger with id={wandb_id}.') else: wandb_id = wandb.util.generate_id() resume = 'never' wandb.init(id=wandb_id, resume=resume, name=opt['name'], config=opt, project=project, sync_tensorboard=True) logger.info(f'Use wandb logger with id={wandb_id}; project={project}.')
We now only use wandb to sync tensorboard log.
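A sketch of the minimal `opt` dictionary this function expects, inferred from the keys it reads above; the project and run names are placeholders, and actually running it requires `wandb` to be installed plus the full basicsr logging module (the snippet's `get_root_logger` refers to a module-level `initialized_logger` registry that is not shown here).

opt = {
    'name': 'experiment_001',              # run name (placeholder)
    'logger': {
        'wandb': {
            'project': 'my_project',       # wandb project name (placeholder)
            'resume_id': None,             # or an existing wandb run id to resume
        },
    },
}
init_wandb_logger(opt)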
2,669
import datetime import logging import time from .dist_util import get_dist_info, master_only The provided code snippet includes necessary dependencies for implementing the `get_env_info` function. Write a Python function `def get_env_info()` to solve the following problem: Get environment information. Currently, only log the software version. Here is the function: def get_env_info(): """Get environment information. Currently, only log the software version. """ import torch import torchvision from basicsr.version import __version__ msg = r""" ____ _ _____ ____ / __ ) ____ _ _____ (_)_____/ ___/ / __ \ / __ |/ __ `// ___// // ___/\__ \ / /_/ / / /_/ // /_/ /(__ )/ // /__ ___/ // _, _/ /_____/ \__,_//____//_/ \___//____//_/ |_| ______ __ __ __ __ / ____/____ ____ ____/ / / / __ __ _____ / /__ / / / / __ / __ \ / __ \ / __ / / / / / / // ___// //_/ / / / /_/ // /_/ // /_/ // /_/ / / /___/ /_/ // /__ / /< /_/ \____/ \____/ \____/ \____/ /_____/\____/ \___//_/|_| (_) """ msg += ('\nVersion Information: ' f'\n\tBasicSR: {__version__}' f'\n\tPyTorch: {torch.__version__}' f'\n\tTorchVision: {torchvision.__version__}') return msg
Get environment information. Currently, only log the software version.
2,670
import re

The provided code snippet includes necessary dependencies for implementing the `read_data_from_tensorboard` function.
Write a Python function `def read_data_from_tensorboard(log_path, tag)` to solve the following problem:
    Get raw data (steps and values) from tensorboard events.
    Args:
        log_path (str): Path to the tensorboard log.
        tag (str): tag to be read.
Here is the function:

def read_data_from_tensorboard(log_path, tag):
    """Get raw data (steps and values) from tensorboard events.

    Args:
        log_path (str): Path to the tensorboard log.
        tag (str): tag to be read.
    """
    from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

    # tensorboard event
    event_acc = EventAccumulator(log_path)
    event_acc.Reload()
    scalar_list = event_acc.Tags()['scalars']
    print('tag list: ', scalar_list)
    steps = [int(s.step) for s in event_acc.Scalars(tag)]
    values = [s.value for s in event_acc.Scalars(tag)]
    return steps, values
Get raw data (steps and values) from tensorboard events. Args: log_path (str): Path to the tensorboard log. tag (str): tag to be read.
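A usage sketch; the event directory and scalar tag are placeholders, and `tensorboard` must be installed for the `EventAccumulator` import inside the function to succeed.

# Hypothetical event directory and tag; adjust to your own logs.
steps, values = read_data_from_tensorboard('experiments/exp001/tb_logger', 'l_pix')
print(len(steps), steps[:3], values[:3])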
2,671
import re

The provided code snippet includes necessary dependencies for implementing the `read_data_from_txt_2v` function.
Write a Python function `def read_data_from_txt_2v(path, pattern, step_one=False)` to solve the following problem:
    Read data from txt with 2 returned values (usually [step, value]).
    Args:
        path (str): path to the txt file.
        pattern (str): re (regular expression) pattern.
        step_one (bool): add 1 to steps. Default: False.
Here is the function:

def read_data_from_txt_2v(path, pattern, step_one=False):
    """Read data from txt with 2 returned values (usually [step, value]).

    Args:
        path (str): path to the txt file.
        pattern (str): re (regular expression) pattern.
        step_one (bool): add 1 to steps. Default: False.
    """
    with open(path) as f:
        lines = f.readlines()
    lines = [line.strip() for line in lines]
    steps = []
    values = []

    pattern = re.compile(pattern)
    for line in lines:
        match = pattern.match(line)
        if match:
            steps.append(int(match.group(1)))
            values.append(float(match.group(2)))
    if step_one:
        steps = [v + 1 for v in steps]
    return steps, values
Read data from txt with 2 returned values (usually [step, value]). Args: path (str): path to the txt file. pattern (str): re (regular expression) pattern. step_one (bool): add 1 to steps. Default: False.
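A usage sketch with a hypothetical log format (the file name and line layout are assumptions); the only real requirement is that the regex has two capture groups, the first parsed as an int step and the second as a float value.

# Suppose train.log contains lines such as:  iter: 1000, l_pix: 0.0123
steps, values = read_data_from_txt_2v(
    'train.log', r'.*?iter: (\d+), l_pix: ([\d.eE+-]+)', step_one=False)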
2,672
import re

The provided code snippet includes necessary dependencies for implementing the `read_data_from_txt_1v` function.
Write a Python function `def read_data_from_txt_1v(path, pattern)` to solve the following problem:
    Read data from txt with 1 returned values.
    Args:
        path (str): path to the txt file.
        pattern (str): re (regular expression) pattern.
Here is the function:

def read_data_from_txt_1v(path, pattern):
    """Read data from txt with 1 returned values.

    Args:
        path (str): path to the txt file.
        pattern (str): re (regular expression) pattern.
    """
    with open(path) as f:
        lines = f.readlines()
    lines = [line.strip() for line in lines]
    data = []

    pattern = re.compile(pattern)
    for line in lines:
        match = pattern.match(line)
        if match:
            data.append(float(match.group(1)))
    return data
Read data from txt with 1 returned values. Args: path (str): path to the txt file. pattern (str): re (regular expression) pattern.
2,673
import re

The provided code snippet includes necessary dependencies for implementing the `smooth_data` function.
Write a Python function `def smooth_data(values, smooth_weight)` to solve the following problem:
    Smooth data using 1st-order IIR low-pass filter (what tensorflow does).
    Reference: https://github.com/tensorflow/tensorboard/blob/f801ebf1f9fbfe2baee1ddd65714d0bccc640fb1/tensorboard/plugins/scalar/vz_line_chart/vz-line-chart.ts#L704  # noqa: E501
    Args:
        values (list): A list of values to be smoothed.
        smooth_weight (float): Smooth weight.
Here is the function:

def smooth_data(values, smooth_weight):
    """ Smooth data using 1st-order IIR low-pass filter (what tensorflow does).

    Reference: https://github.com/tensorflow/tensorboard/blob/f801ebf1f9fbfe2baee1ddd65714d0bccc640fb1/tensorboard/plugins/scalar/vz_line_chart/vz-line-chart.ts#L704  # noqa: E501

    Args:
        values (list): A list of values to be smoothed.
        smooth_weight (float): Smooth weight.
    """
    values_sm = []
    last_sm_value = values[0]
    for value in values:
        value_sm = last_sm_value * smooth_weight + (1 - smooth_weight) * value
        values_sm.append(value_sm)
        last_sm_value = value_sm
    return values_sm
Smooth data using 1st-order IIR low-pass filter (what tensorflow does). Reference: https://github.com/tensorflow/tensorboard/blob/f801ebf1f9fbfe2baee1ddd65714d0bccc640fb1/tensorboard/plugins/scalar/vz_line_chart/vz-line-chart.ts#L704 # noqa: E501 Args: values (list): A list of values to be smoothed. smooth_weight (float): Smooth weight.
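A hand-checkable example of the exponential (1st-order IIR) smoothing: each output is smooth_weight * previous_smoothed + (1 - smooth_weight) * current, seeded with the first value.

print(smooth_data([0.0, 1.0, 1.0, 1.0], smooth_weight=0.6))
# ≈ [0.0, 0.4, 0.64, 0.784]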
2,674
import os import PIL import numpy as np import copy import torch from omegaconf import OmegaConf from PIL import Image from tqdm import trange from itertools import islice from einops import rearrange, repeat from torch import autocast from pytorch_lightning import seed_everything import torch.nn.functional as F from ldm.util import instantiate_from_config from scripts.wavelet_color_fix import ( wavelet_reconstruction, adaptive_instance_normalization, ) from cog import BasePredictor, Input, Path def instantiate_from_config(config): if not "target" in config: if config == '__is_first_stage__': return None elif config == "__is_unconditional__": return None raise KeyError("Expected key `target` to instantiate.") return get_obj_from_str(config["target"])(**config.get("params", dict())) def load_model_from_config(config, ckpt, verbose=False): print(f"Loading model from {ckpt}") pl_sd = torch.load(ckpt, map_location="cpu") if "global_step" in pl_sd: print(f"Global Step: {pl_sd['global_step']}") sd = pl_sd["state_dict"] model = instantiate_from_config(config.model) m, u = model.load_state_dict(sd, strict=False) if len(m) > 0 and verbose: print("missing keys:") print(m) if len(u) > 0 and verbose: print("unexpected keys:") print(u) model.cuda() model.eval() return model
null
2,675
import os import PIL import numpy as np import copy import torch from omegaconf import OmegaConf from PIL import Image from tqdm import trange from itertools import islice from einops import rearrange, repeat from torch import autocast from pytorch_lightning import seed_everything import torch.nn.functional as F from ldm.util import instantiate_from_config from scripts.wavelet_color_fix import ( wavelet_reconstruction, adaptive_instance_normalization, ) from cog import BasePredictor, Input, Path def read_image(im_path): im = np.array(Image.open(im_path).convert("RGB")) im = im.astype(np.float32) / 255.0 im = im[None].transpose(0, 3, 1, 2) im = (torch.from_numpy(im) - 0.5) / 0.5 return im.cuda()
null
2,676
import os import PIL import numpy as np import copy import torch from omegaconf import OmegaConf from PIL import Image from tqdm import trange from itertools import islice from einops import rearrange, repeat from torch import autocast from pytorch_lightning import seed_everything import torch.nn.functional as F from ldm.util import instantiate_from_config from scripts.wavelet_color_fix import ( wavelet_reconstruction, adaptive_instance_normalization, ) from cog import BasePredictor, Input, Path def space_timesteps(num_timesteps, section_counts): if isinstance(section_counts, str): if section_counts.startswith("ddim"): desired_count = int(section_counts[len("ddim") :]) for i in range(1, num_timesteps): if len(range(0, num_timesteps, i)) == desired_count: return set(range(0, num_timesteps, i)) raise ValueError( f"cannot create exactly {num_timesteps} steps with an integer stride" ) section_counts = [int(x) for x in section_counts.split(",")] # [250,] size_per = num_timesteps // len(section_counts) extra = num_timesteps % len(section_counts) start_idx = 0 all_steps = [] for i, section_count in enumerate(section_counts): size = size_per + (1 if i < extra else 0) if size < section_count: raise ValueError( f"cannot divide section of {size} steps into {section_count}" ) if section_count <= 1: frac_stride = 1 else: frac_stride = (size - 1) / (section_count - 1) cur_idx = 0.0 taken_steps = [] for _ in range(section_count): taken_steps.append(start_idx + round(cur_idx)) cur_idx += frac_stride all_steps += taken_steps start_idx += size return set(all_steps)
null
2,677
import os import PIL import numpy as np import copy import torch from omegaconf import OmegaConf from PIL import Image from tqdm import trange from itertools import islice from einops import rearrange, repeat from torch import autocast from pytorch_lightning import seed_everything import torch.nn.functional as F from ldm.util import instantiate_from_config from scripts.wavelet_color_fix import ( wavelet_reconstruction, adaptive_instance_normalization, ) from cog import BasePredictor, Input, Path def chunk(it, size): it = iter(it) return iter(lambda: tuple(islice(it, size)), ())
null
2,678
import os import PIL import numpy as np import copy import torch from omegaconf import OmegaConf from PIL import Image from tqdm import trange from itertools import islice from einops import rearrange, repeat from torch import autocast from pytorch_lightning import seed_everything import torch.nn.functional as F from ldm.util import instantiate_from_config from scripts.wavelet_color_fix import ( wavelet_reconstruction, adaptive_instance_normalization, ) from cog import BasePredictor, Input, Path def load_img(path): image = Image.open(path).convert("RGB") w, h = image.size print(f"loaded input image of size ({w}, {h}) from {path}") w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32 image = image.resize((w, h), resample=PIL.Image.LANCZOS) image = np.array(image).astype(np.float32) / 255.0 image = image[None].transpose(0, 3, 1, 2) image = torch.from_numpy(image) return 2.0 * image - 1.0
null
2,679
import argparse, os, sys, glob import PIL import torch import numpy as np import torchvision from omegaconf import OmegaConf from PIL import Image from tqdm import tqdm, trange from itertools import islice from einops import rearrange, repeat from torchvision.utils import make_grid from torch import autocast from contextlib import nullcontext import time from pytorch_lightning import seed_everything from ldm.util import instantiate_from_config from ldm.models.diffusion.ddim import DDIMSampler from ldm.models.diffusion.plms import PLMSSampler from basicsr.metrics import calculate_niqe import math import copy from basicsr.utils import DiffJPEG from basicsr.data.realesrgan_dataset import RealESRGANDataset from torch.utils.data import random_split, DataLoader, Dataset, Subset def calc_mean_std(feat, eps=1e-5): """Calculate mean and std for adaptive_instance_normalization. Args: feat (Tensor): 4D tensor. eps (float): A small value added to the variance to avoid divide-by-zero. Default: 1e-5. """ size = feat.size() assert len(size) == 4, 'The input feature should be 4D tensor.' b, c = size[:2] feat_var = feat.view(b, c, -1).var(dim=2) + eps feat_std = feat_var.sqrt().view(b, c, 1, 1) feat_mean = feat.view(b, c, -1).mean(dim=2).view(b, c, 1, 1) return feat_mean, feat_std The provided code snippet includes necessary dependencies for implementing the `adaptive_instance_normalization` function. Write a Python function `def adaptive_instance_normalization(content_feat, style_feat)` to solve the following problem: Adaptive instance normalization. Adjust the reference features to have the similar color and illuminations as those in the degradate features. Args: content_feat (Tensor): The reference feature. style_feat (Tensor): The degradate features. Here is the function: def adaptive_instance_normalization(content_feat, style_feat): """Adaptive instance normalization. Adjust the reference features to have the similar color and illuminations as those in the degradate features. Args: content_feat (Tensor): The reference feature. style_feat (Tensor): The degradate features. """ size = content_feat.size() style_mean, style_std = calc_mean_std(style_feat) content_mean, content_std = calc_mean_std(content_feat) normalized_feat = (content_feat - content_mean.expand(size)) / content_std.expand(size) return normalized_feat * style_std.expand(size) + style_mean.expand(size)
Adaptive instance normalization. Adjust the reference features to have the similar color and illuminations as those in the degradate features. Args: content_feat (Tensor): The reference feature. style_feat (Tensor): The degradate features.
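A small sanity check on random tensors (shapes and statistics are arbitrary): after adaptive instance normalization, the per-channel mean and std of the output approximately match those of the style features.

import torch

content = torch.randn(1, 3, 16, 16) * 2.0 + 5.0   # arbitrary content statistics
style = torch.randn(1, 3, 16, 16) * 0.5 - 1.0     # arbitrary style statistics
out = adaptive_instance_normalization(content, style)
print(out.mean(dim=(2, 3)))   # ~ -1.0 per channel (style mean)
print(out.std(dim=(2, 3)))    # ~ 0.5 per channel (style std)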
2,680
import argparse, os, sys, glob import PIL import torch import numpy as np import torchvision from omegaconf import OmegaConf from PIL import Image from tqdm import tqdm, trange from itertools import islice from einops import rearrange, repeat from torchvision.utils import make_grid from torch import autocast from contextlib import nullcontext import time from pytorch_lightning import seed_everything from ldm.util import instantiate_from_config from ldm.models.diffusion.ddim import DDIMSampler from ldm.models.diffusion.plms import PLMSSampler from basicsr.metrics import calculate_niqe import math import copy from basicsr.utils import DiffJPEG from basicsr.data.realesrgan_dataset import RealESRGANDataset from torch.utils.data import random_split, DataLoader, Dataset, Subset The provided code snippet includes necessary dependencies for implementing the `space_timesteps` function. Write a Python function `def space_timesteps(num_timesteps, section_counts)` to solve the following problem: Create a list of timesteps to use from an original diffusion process, given the number of timesteps we want to take from equally-sized portions of the original process. For example, if there's 300 timesteps and the section counts are [10,15,20] then the first 100 timesteps are strided to be 10 timesteps, the second 100 are strided to be 15 timesteps, and the final 100 are strided to be 20. If the stride is a string starting with "ddim", then the fixed striding from the DDIM paper is used, and only one section is allowed. :param num_timesteps: the number of diffusion steps in the original process to divide up. :param section_counts: either a list of numbers, or a string containing comma-separated numbers, indicating the step count per section. As a special case, use "ddimN" where N is a number of steps to use the striding from the DDIM paper. :return: a set of diffusion steps from the original process to use. Here is the function: def space_timesteps(num_timesteps, section_counts): """ Create a list of timesteps to use from an original diffusion process, given the number of timesteps we want to take from equally-sized portions of the original process. For example, if there's 300 timesteps and the section counts are [10,15,20] then the first 100 timesteps are strided to be 10 timesteps, the second 100 are strided to be 15 timesteps, and the final 100 are strided to be 20. If the stride is a string starting with "ddim", then the fixed striding from the DDIM paper is used, and only one section is allowed. :param num_timesteps: the number of diffusion steps in the original process to divide up. :param section_counts: either a list of numbers, or a string containing comma-separated numbers, indicating the step count per section. As a special case, use "ddimN" where N is a number of steps to use the striding from the DDIM paper. :return: a set of diffusion steps from the original process to use. 
""" if isinstance(section_counts, str): if section_counts.startswith("ddim"): desired_count = int(section_counts[len("ddim"):]) for i in range(1, num_timesteps): if len(range(0, num_timesteps, i)) == desired_count: return set(range(0, num_timesteps, i)) raise ValueError( f"cannot create exactly {num_timesteps} steps with an integer stride" ) section_counts = [int(x) for x in section_counts.split(",")] #[250,] size_per = num_timesteps // len(section_counts) extra = num_timesteps % len(section_counts) start_idx = 0 all_steps = [] for i, section_count in enumerate(section_counts): size = size_per + (1 if i < extra else 0) if size < section_count: raise ValueError( f"cannot divide section of {size} steps into {section_count}" ) if section_count <= 1: frac_stride = 1 else: frac_stride = (size - 1) / (section_count - 1) cur_idx = 0.0 taken_steps = [] for _ in range(section_count): taken_steps.append(start_idx + round(cur_idx)) cur_idx += frac_stride all_steps += taken_steps start_idx += size return set(all_steps)
Create a list of timesteps to use from an original diffusion process, given the number of timesteps we want to take from equally-sized portions of the original process. For example, if there's 300 timesteps and the section counts are [10,15,20] then the first 100 timesteps are strided to be 10 timesteps, the second 100 are strided to be 15 timesteps, and the final 100 are strided to be 20. If the stride is a string starting with "ddim", then the fixed striding from the DDIM paper is used, and only one section is allowed. :param num_timesteps: the number of diffusion steps in the original process to divide up. :param section_counts: either a list of numbers, or a string containing comma-separated numbers, indicating the step count per section. As a special case, use "ddimN" where N is a number of steps to use the striding from the DDIM paper. :return: a set of diffusion steps from the original process to use.
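Two worked calls matching the docstring: the "ddimN" form searches for an integer stride that yields exactly N steps, and the list form splits the schedule into equally sized sections.

print(sorted(space_timesteps(1000, "ddim10")))
# [0, 100, 200, 300, 400, 500, 600, 700, 800, 900]

print(len(space_timesteps(300, [10, 15, 20])))
# 45 -- 10 steps from [0, 100), 15 from [100, 200), 20 from [200, 300)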
2,681
import argparse, os, sys, glob import PIL import torch import numpy as np import torchvision from omegaconf import OmegaConf from PIL import Image from tqdm import tqdm, trange from itertools import islice from einops import rearrange, repeat from torchvision.utils import make_grid from torch import autocast from contextlib import nullcontext import time from pytorch_lightning import seed_everything from ldm.util import instantiate_from_config from ldm.models.diffusion.ddim import DDIMSampler from ldm.models.diffusion.plms import PLMSSampler from basicsr.metrics import calculate_niqe import math import copy from basicsr.utils import DiffJPEG from basicsr.data.realesrgan_dataset import RealESRGANDataset from torch.utils.data import random_split, DataLoader, Dataset, Subset def chunk(it, size): it = iter(it) return iter(lambda: tuple(islice(it, size)), ())
null
2,682
import argparse, os, sys, glob import PIL import torch import numpy as np import torchvision from omegaconf import OmegaConf from PIL import Image from tqdm import tqdm, trange from itertools import islice from einops import rearrange, repeat from torchvision.utils import make_grid from torch import autocast from contextlib import nullcontext import time from pytorch_lightning import seed_everything from ldm.util import instantiate_from_config from ldm.models.diffusion.ddim import DDIMSampler from ldm.models.diffusion.plms import PLMSSampler from basicsr.metrics import calculate_niqe import math import copy from basicsr.utils import DiffJPEG from basicsr.data.realesrgan_dataset import RealESRGANDataset from torch.utils.data import random_split, DataLoader, Dataset, Subset def instantiate_from_config(config): if not "target" in config: if config == '__is_first_stage__': return None elif config == "__is_unconditional__": return None raise KeyError("Expected key `target` to instantiate.") return get_obj_from_str(config["target"])(**config.get("params", dict())) def load_model_from_config(config, ckpt, verbose=False): print(f"Loading model from {ckpt}") pl_sd = torch.load(ckpt, map_location="cpu") if "global_step" in pl_sd: print(f"Global Step: {pl_sd['global_step']}") sd = pl_sd["state_dict"] model = instantiate_from_config(config.model) m, u = model.load_state_dict(sd, strict=False) if len(m) > 0 and verbose: print("missing keys:") print(m) if len(u) > 0 and verbose: print("unexpected keys:") print(u) model.cuda() model.eval() return model
null
2,683
import argparse, os, sys, glob import PIL import torch import numpy as np import torchvision from omegaconf import OmegaConf from PIL import Image from tqdm import tqdm, trange from itertools import islice from einops import rearrange, repeat from torchvision.utils import make_grid from torch import autocast from contextlib import nullcontext import time from pytorch_lightning import seed_everything from ldm.util import instantiate_from_config from ldm.models.diffusion.ddim import DDIMSampler from ldm.models.diffusion.plms import PLMSSampler from basicsr.metrics import calculate_niqe import math import copy from basicsr.utils import DiffJPEG from basicsr.data.realesrgan_dataset import RealESRGANDataset from torch.utils.data import random_split, DataLoader, Dataset, Subset def load_img(path): image = Image.open(path).convert("RGB") w, h = image.size print(f"loaded input image of size ({w}, {h}) from {path}") w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32 image = image.resize((w, h), resample=PIL.Image.LANCZOS) image = np.array(image).astype(np.float32) / 255.0 image = image[None].transpose(0, 3, 1, 2) image = torch.from_numpy(image) return 2.*image - 1.
null
2,684
import argparse, os, sys, glob import PIL import torch import numpy as np import torchvision from omegaconf import OmegaConf from PIL import Image from tqdm import tqdm, trange from itertools import islice from einops import rearrange, repeat from torchvision.utils import make_grid from torch import autocast from contextlib import nullcontext import time from pytorch_lightning import seed_everything from ldm.util import instantiate_from_config from ldm.models.diffusion.ddim import DDIMSampler from ldm.models.diffusion.plms import PLMSSampler from basicsr.metrics import calculate_niqe import math import copy from basicsr.utils import DiffJPEG from basicsr.data.ffhq_degradation_dataset import FFHQDegradationDataset from torch.utils.data import random_split, DataLoader, Dataset, Subset def calc_mean_std(feat, eps=1e-5): """Calculate mean and std for adaptive_instance_normalization. Args: feat (Tensor): 4D tensor. eps (float): A small value added to the variance to avoid divide-by-zero. Default: 1e-5. """ size = feat.size() assert len(size) == 4, 'The input feature should be 4D tensor.' b, c = size[:2] feat_var = feat.view(b, c, -1).var(dim=2) + eps feat_std = feat_var.sqrt().view(b, c, 1, 1) feat_mean = feat.view(b, c, -1).mean(dim=2).view(b, c, 1, 1) return feat_mean, feat_std The provided code snippet includes necessary dependencies for implementing the `adaptive_instance_normalization` function. Write a Python function `def adaptive_instance_normalization(content_feat, style_feat)` to solve the following problem: Adaptive instance normalization. Adjust the reference features to have the similar color and illuminations as those in the degradate features. Args: content_feat (Tensor): The reference feature. style_feat (Tensor): The degradate features. Here is the function: def adaptive_instance_normalization(content_feat, style_feat): """Adaptive instance normalization. Adjust the reference features to have the similar color and illuminations as those in the degradate features. Args: content_feat (Tensor): The reference feature. style_feat (Tensor): The degradate features. """ size = content_feat.size() style_mean, style_std = calc_mean_std(style_feat) content_mean, content_std = calc_mean_std(content_feat) normalized_feat = (content_feat - content_mean.expand(size)) / content_std.expand(size) return normalized_feat * style_std.expand(size) + style_mean.expand(size)
Adaptive instance normalization. Adjust the reference features to have the similar color and illuminations as those in the degradate features. Args: content_feat (Tensor): The reference feature. style_feat (Tensor): The degradate features.
2,685
import argparse, os, sys, glob import PIL import torch import numpy as np import torchvision from omegaconf import OmegaConf from PIL import Image from tqdm import tqdm, trange from itertools import islice from einops import rearrange, repeat from torchvision.utils import make_grid from torch import autocast from contextlib import nullcontext import time from pytorch_lightning import seed_everything from ldm.util import instantiate_from_config from ldm.models.diffusion.ddim import DDIMSampler from ldm.models.diffusion.plms import PLMSSampler from basicsr.metrics import calculate_niqe import math import copy from basicsr.utils import DiffJPEG from basicsr.data.ffhq_degradation_dataset import FFHQDegradationDataset from torch.utils.data import random_split, DataLoader, Dataset, Subset The provided code snippet includes necessary dependencies for implementing the `space_timesteps` function. Write a Python function `def space_timesteps(num_timesteps, section_counts)` to solve the following problem: Create a list of timesteps to use from an original diffusion process, given the number of timesteps we want to take from equally-sized portions of the original process. For example, if there's 300 timesteps and the section counts are [10,15,20] then the first 100 timesteps are strided to be 10 timesteps, the second 100 are strided to be 15 timesteps, and the final 100 are strided to be 20. If the stride is a string starting with "ddim", then the fixed striding from the DDIM paper is used, and only one section is allowed. :param num_timesteps: the number of diffusion steps in the original process to divide up. :param section_counts: either a list of numbers, or a string containing comma-separated numbers, indicating the step count per section. As a special case, use "ddimN" where N is a number of steps to use the striding from the DDIM paper. :return: a set of diffusion steps from the original process to use. Here is the function: def space_timesteps(num_timesteps, section_counts): """ Create a list of timesteps to use from an original diffusion process, given the number of timesteps we want to take from equally-sized portions of the original process. For example, if there's 300 timesteps and the section counts are [10,15,20] then the first 100 timesteps are strided to be 10 timesteps, the second 100 are strided to be 15 timesteps, and the final 100 are strided to be 20. If the stride is a string starting with "ddim", then the fixed striding from the DDIM paper is used, and only one section is allowed. :param num_timesteps: the number of diffusion steps in the original process to divide up. :param section_counts: either a list of numbers, or a string containing comma-separated numbers, indicating the step count per section. As a special case, use "ddimN" where N is a number of steps to use the striding from the DDIM paper. :return: a set of diffusion steps from the original process to use. 
""" if isinstance(section_counts, str): if section_counts.startswith("ddim"): desired_count = int(section_counts[len("ddim"):]) for i in range(1, num_timesteps): if len(range(0, num_timesteps, i)) == desired_count: return set(range(0, num_timesteps, i)) raise ValueError( f"cannot create exactly {num_timesteps} steps with an integer stride" ) section_counts = [int(x) for x in section_counts.split(",")] #[250,] size_per = num_timesteps // len(section_counts) extra = num_timesteps % len(section_counts) start_idx = 0 all_steps = [] for i, section_count in enumerate(section_counts): size = size_per + (1 if i < extra else 0) if size < section_count: raise ValueError( f"cannot divide section of {size} steps into {section_count}" ) if section_count <= 1: frac_stride = 1 else: frac_stride = (size - 1) / (section_count - 1) cur_idx = 0.0 taken_steps = [] for _ in range(section_count): taken_steps.append(start_idx + round(cur_idx)) cur_idx += frac_stride all_steps += taken_steps start_idx += size return set(all_steps)
Create a list of timesteps to use from an original diffusion process, given the number of timesteps we want to take from equally-sized portions of the original process. For example, if there's 300 timesteps and the section counts are [10,15,20] then the first 100 timesteps are strided to be 10 timesteps, the second 100 are strided to be 15 timesteps, and the final 100 are strided to be 20. If the stride is a string starting with "ddim", then the fixed striding from the DDIM paper is used, and only one section is allowed. :param num_timesteps: the number of diffusion steps in the original process to divide up. :param section_counts: either a list of numbers, or a string containing comma-separated numbers, indicating the step count per section. As a special case, use "ddimN" where N is a number of steps to use the striding from the DDIM paper. :return: a set of diffusion steps from the original process to use.
2,686
import argparse, os, sys, glob import PIL import torch import numpy as np import torchvision from omegaconf import OmegaConf from PIL import Image from tqdm import tqdm, trange from itertools import islice from einops import rearrange, repeat from torchvision.utils import make_grid from torch import autocast from contextlib import nullcontext import time from pytorch_lightning import seed_everything from ldm.util import instantiate_from_config from ldm.models.diffusion.ddim import DDIMSampler from ldm.models.diffusion.plms import PLMSSampler from basicsr.metrics import calculate_niqe import math import copy from basicsr.utils import DiffJPEG from basicsr.data.ffhq_degradation_dataset import FFHQDegradationDataset from torch.utils.data import random_split, DataLoader, Dataset, Subset def chunk(it, size): it = iter(it) return iter(lambda: tuple(islice(it, size)), ())
null
2,687
import argparse, os, sys, glob import PIL import torch import numpy as np import torchvision from omegaconf import OmegaConf from PIL import Image from tqdm import tqdm, trange from itertools import islice from einops import rearrange, repeat from torchvision.utils import make_grid from torch import autocast from contextlib import nullcontext import time from pytorch_lightning import seed_everything from ldm.util import instantiate_from_config from ldm.models.diffusion.ddim import DDIMSampler from ldm.models.diffusion.plms import PLMSSampler from basicsr.metrics import calculate_niqe import math import copy from basicsr.utils import DiffJPEG from basicsr.data.ffhq_degradation_dataset import FFHQDegradationDataset from torch.utils.data import random_split, DataLoader, Dataset, Subset def instantiate_from_config(config): if not "target" in config: if config == '__is_first_stage__': return None elif config == "__is_unconditional__": return None raise KeyError("Expected key `target` to instantiate.") return get_obj_from_str(config["target"])(**config.get("params", dict())) def load_model_from_config(config, ckpt, verbose=False): print(f"Loading model from {ckpt}") pl_sd = torch.load(ckpt, map_location="cpu") if "global_step" in pl_sd: print(f"Global Step: {pl_sd['global_step']}") sd = pl_sd["state_dict"] model = instantiate_from_config(config.model) m, u = model.load_state_dict(sd, strict=False) if len(m) > 0 and verbose: print("missing keys:") print(m) if len(u) > 0 and verbose: print("unexpected keys:") print(u) model.cuda() model.eval() return model
null
2,688
import argparse, os, sys, glob import PIL import torch import numpy as np import torchvision from omegaconf import OmegaConf from PIL import Image from tqdm import tqdm, trange from itertools import islice from einops import rearrange, repeat from torchvision.utils import make_grid from torch import autocast from contextlib import nullcontext import time from pytorch_lightning import seed_everything from ldm.util import instantiate_from_config from ldm.models.diffusion.ddim import DDIMSampler from ldm.models.diffusion.plms import PLMSSampler from basicsr.metrics import calculate_niqe import math import copy from basicsr.utils import DiffJPEG from basicsr.data.ffhq_degradation_dataset import FFHQDegradationDataset from torch.utils.data import random_split, DataLoader, Dataset, Subset def load_img(path): image = Image.open(path).convert("RGB") w, h = image.size print(f"loaded input image of size ({w}, {h}) from {path}") w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32 image = image.resize((w, h), resample=PIL.Image.LANCZOS) image = np.array(image).astype(np.float32) / 255.0 image = image[None].transpose(0, 3, 1, 2) image = torch.from_numpy(image) return 2.*image - 1.
null
2,689
import argparse, os, sys, glob import PIL import torch import numpy as np import torchvision from omegaconf import OmegaConf from PIL import Image from tqdm import tqdm, trange from itertools import islice from einops import rearrange, repeat from torchvision.utils import make_grid from torch import autocast from contextlib import nullcontext import time from pytorch_lightning import seed_everything from ldm.util import instantiate_from_config from ldm.models.diffusion.ddim import DDIMSampler from basicsr.metrics import calculate_niqe import math import copy import torch.nn.functional as F from scripts.wavelet_color_fix import wavelet_reconstruction, adaptive_instance_normalization from util_image import ImageSpliterTh from pathlib import Path def calc_mean_std(feat, eps=1e-5): """Calculate mean and std for adaptive_instance_normalization. Args: feat (Tensor): 4D tensor. eps (float): A small value added to the variance to avoid divide-by-zero. Default: 1e-5. """ size = feat.size() assert len(size) == 4, 'The input feature should be 4D tensor.' b, c = size[:2] feat_var = feat.view(b, c, -1).var(dim=2) + eps feat_std = feat_var.sqrt().view(b, c, 1, 1) feat_mean = feat.view(b, c, -1).mean(dim=2).view(b, c, 1, 1) return feat_mean, feat_std The provided code snippet includes necessary dependencies for implementing the `adaptive_instance_normalization` function. Write a Python function `def adaptive_instance_normalization(content_feat, style_feat)` to solve the following problem: Adaptive instance normalization. Adjust the reference features to have the similar color and illuminations as those in the degradate features. Args: content_feat (Tensor): The reference feature. style_feat (Tensor): The degradate features. Here is the function: def adaptive_instance_normalization(content_feat, style_feat): """Adaptive instance normalization. Adjust the reference features to have the similar color and illuminations as those in the degradate features. Args: content_feat (Tensor): The reference feature. style_feat (Tensor): The degradate features. """ size = content_feat.size() style_mean, style_std = calc_mean_std(style_feat) content_mean, content_std = calc_mean_std(content_feat) normalized_feat = (content_feat - content_mean.expand(size)) / content_std.expand(size) return normalized_feat * style_std.expand(size) + style_mean.expand(size)
Adaptive instance normalization. Adjust the reference features to have similar color and illumination to those in the degraded features. Args: content_feat (Tensor): The reference feature. style_feat (Tensor): The degraded features.
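A small usage sketch (illustrative, not part of the dataset row) showing how `adaptive_instance_normalization` can transfer the color statistics of a degraded input onto a restored output; the tensor shapes are arbitrary and the function is assumed to be in scope from the row above.

import torch

restored = torch.rand(1, 3, 256, 256)  # content: e.g. the restored/super-resolved image
degraded = torch.rand(1, 3, 256, 256)  # style: e.g. the low-quality input
# Re-normalize the restored image to match the per-channel mean/std of the degraded one.
color_fixed = adaptive_instance_normalization(restored, degraded)
assert color_fixed.shape == restored.shape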
2,690
import argparse, os, sys, glob import PIL import torch import numpy as np import torchvision from omegaconf import OmegaConf from PIL import Image from tqdm import tqdm, trange from itertools import islice from einops import rearrange, repeat from torchvision.utils import make_grid from torch import autocast from contextlib import nullcontext import time from pytorch_lightning import seed_everything from ldm.util import instantiate_from_config from ldm.models.diffusion.ddim import DDIMSampler from basicsr.metrics import calculate_niqe import math import copy import torch.nn.functional as F from scripts.wavelet_color_fix import wavelet_reconstruction, adaptive_instance_normalization from util_image import ImageSpliterTh from pathlib import Path The provided code snippet includes necessary dependencies for implementing the `space_timesteps` function. Write a Python function `def space_timesteps(num_timesteps, section_counts)` to solve the following problem: Create a list of timesteps to use from an original diffusion process, given the number of timesteps we want to take from equally-sized portions of the original process. For example, if there's 300 timesteps and the section counts are [10,15,20] then the first 100 timesteps are strided to be 10 timesteps, the second 100 are strided to be 15 timesteps, and the final 100 are strided to be 20. If the stride is a string starting with "ddim", then the fixed striding from the DDIM paper is used, and only one section is allowed. :param num_timesteps: the number of diffusion steps in the original process to divide up. :param section_counts: either a list of numbers, or a string containing comma-separated numbers, indicating the step count per section. As a special case, use "ddimN" where N is a number of steps to use the striding from the DDIM paper. :return: a set of diffusion steps from the original process to use. Here is the function: def space_timesteps(num_timesteps, section_counts): """ Create a list of timesteps to use from an original diffusion process, given the number of timesteps we want to take from equally-sized portions of the original process. For example, if there's 300 timesteps and the section counts are [10,15,20] then the first 100 timesteps are strided to be 10 timesteps, the second 100 are strided to be 15 timesteps, and the final 100 are strided to be 20. If the stride is a string starting with "ddim", then the fixed striding from the DDIM paper is used, and only one section is allowed. :param num_timesteps: the number of diffusion steps in the original process to divide up. :param section_counts: either a list of numbers, or a string containing comma-separated numbers, indicating the step count per section. As a special case, use "ddimN" where N is a number of steps to use the striding from the DDIM paper. :return: a set of diffusion steps from the original process to use. 
""" if isinstance(section_counts, str): if section_counts.startswith("ddim"): desired_count = int(section_counts[len("ddim"):]) for i in range(1, num_timesteps): if len(range(0, num_timesteps, i)) == desired_count: return set(range(0, num_timesteps, i)) raise ValueError( f"cannot create exactly {num_timesteps} steps with an integer stride" ) section_counts = [int(x) for x in section_counts.split(",")] #[250,] size_per = num_timesteps // len(section_counts) extra = num_timesteps % len(section_counts) start_idx = 0 all_steps = [] for i, section_count in enumerate(section_counts): size = size_per + (1 if i < extra else 0) if size < section_count: raise ValueError( f"cannot divide section of {size} steps into {section_count}" ) if section_count <= 1: frac_stride = 1 else: frac_stride = (size - 1) / (section_count - 1) cur_idx = 0.0 taken_steps = [] for _ in range(section_count): taken_steps.append(start_idx + round(cur_idx)) cur_idx += frac_stride all_steps += taken_steps start_idx += size return set(all_steps)
Create a list of timesteps to use from an original diffusion process, given the number of timesteps we want to take from equally-sized portions of the original process. For example, if there's 300 timesteps and the section counts are [10,15,20] then the first 100 timesteps are strided to be 10 timesteps, the second 100 are strided to be 15 timesteps, and the final 100 are strided to be 20. If the stride is a string starting with "ddim", then the fixed striding from the DDIM paper is used, and only one section is allowed. :param num_timesteps: the number of diffusion steps in the original process to divide up. :param section_counts: either a list of numbers, or a string containing comma-separated numbers, indicating the step count per section. As a special case, use "ddimN" where N is a number of steps to use the striding from the DDIM paper. :return: a set of diffusion steps from the original process to use.
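A brief usage sketch for `space_timesteps` (illustrative, not part of the dataset row; the function from the row above is assumed to be in scope). The "ddimN" form searches for an integer stride that yields exactly N timesteps, while a comma-separated count string splits the schedule into equal sections.

steps_ddim = space_timesteps(1000, "ddim50")       # 50 evenly strided steps out of 1000
steps_single = space_timesteps(1000, "250")        # one section thinned to 250 steps
print(len(steps_ddim), len(steps_single))          # 50 250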
2,691
import argparse, os, sys, glob import PIL import torch import numpy as np import torchvision from omegaconf import OmegaConf from PIL import Image from tqdm import tqdm, trange from itertools import islice from einops import rearrange, repeat from torchvision.utils import make_grid from torch import autocast from contextlib import nullcontext import time from pytorch_lightning import seed_everything from ldm.util import instantiate_from_config from ldm.models.diffusion.ddim import DDIMSampler from basicsr.metrics import calculate_niqe import math import copy import torch.nn.functional as F from scripts.wavelet_color_fix import wavelet_reconstruction, adaptive_instance_normalization from util_image import ImageSpliterTh from pathlib import Path def chunk(it, size): it = iter(it) return iter(lambda: tuple(islice(it, size)), ())
null
2,692
import argparse, os, sys, glob import PIL import torch import numpy as np import torchvision from omegaconf import OmegaConf from PIL import Image from tqdm import tqdm, trange from itertools import islice from einops import rearrange, repeat from torchvision.utils import make_grid from torch import autocast from contextlib import nullcontext import time from pytorch_lightning import seed_everything from ldm.util import instantiate_from_config from ldm.models.diffusion.ddim import DDIMSampler from basicsr.metrics import calculate_niqe import math import copy import torch.nn.functional as F from scripts.wavelet_color_fix import wavelet_reconstruction, adaptive_instance_normalization from util_image import ImageSpliterTh from pathlib import Path def instantiate_from_config(config): if not "target" in config: if config == '__is_first_stage__': return None elif config == "__is_unconditional__": return None raise KeyError("Expected key `target` to instantiate.") return get_obj_from_str(config["target"])(**config.get("params", dict())) def load_model_from_config(config, ckpt, verbose=False): print(f"Loading model from {ckpt}") pl_sd = torch.load(ckpt, map_location="cpu") if "global_step" in pl_sd: print(f"Global Step: {pl_sd['global_step']}") sd = pl_sd["state_dict"] model = instantiate_from_config(config.model) m, u = model.load_state_dict(sd, strict=False) print('>>>>>>>>>>>>>>>>>>>load results>>>>>>>>>>>>>>>>>>>>>>>') if len(m) > 0 and verbose: print("missing keys:") print(m) if len(u) > 0 and verbose: print("unexpected keys:") print(u) model.cuda() model.eval() return model
null
2,693
import argparse, os, sys, glob import PIL import torch import numpy as np import torchvision from omegaconf import OmegaConf from PIL import Image from tqdm import tqdm, trange from itertools import islice from einops import rearrange, repeat from torchvision.utils import make_grid from torch import autocast from contextlib import nullcontext import time from pytorch_lightning import seed_everything from ldm.util import instantiate_from_config from ldm.models.diffusion.ddim import DDIMSampler from basicsr.metrics import calculate_niqe import math import copy import torch.nn.functional as F from scripts.wavelet_color_fix import wavelet_reconstruction, adaptive_instance_normalization from util_image import ImageSpliterTh from pathlib import Path def load_img(path): image = Image.open(path).convert("RGB") w, h = image.size print(f"loaded input image of size ({w}, {h}) from {path}") w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32 image = image.resize((w, h), resample=PIL.Image.LANCZOS) image = np.array(image).astype(np.float32) / 255.0 image = image[None].transpose(0, 3, 1, 2) image = torch.from_numpy(image) return 2.*image - 1.
null
2,694
import argparse, os, sys, glob import PIL import torch import numpy as np import torchvision from omegaconf import OmegaConf from PIL import Image from tqdm import tqdm, trange from itertools import islice from einops import rearrange, repeat from torchvision.utils import make_grid from torch import autocast from contextlib import nullcontext import time from pytorch_lightning import seed_everything from ldm.util import instantiate_from_config from ldm.models.diffusion.ddim import DDIMSampler from basicsr.metrics import calculate_niqe import math import copy import torch.nn.functional as F from scripts.wavelet_color_fix import wavelet_reconstruction, adaptive_instance_normalization from util_image import ImageSpliterTh from pathlib import Path def read_image(im_path): im = np.array(Image.open(im_path).convert("RGB")) im = im.astype(np.float32)/255.0 im = im[None].transpose(0,3,1,2) im = (torch.from_numpy(im) - 0.5) / 0.5 return im.cuda()
null
2,695
import argparse, os, sys, glob import PIL import torch import numpy as np import torchvision from omegaconf import OmegaConf from PIL import Image from tqdm import tqdm, trange from itertools import islice from einops import rearrange, repeat from torchvision.utils import make_grid from torch import autocast from contextlib import nullcontext import time from pytorch_lightning import seed_everything from ldm.util import instantiate_from_config from ldm.models.diffusion.ddim import DDIMSampler from basicsr.metrics import calculate_niqe import math import copy import torch.nn.functional as F from scripts.wavelet_color_fix import wavelet_reconstruction, adaptive_instance_normalization def calc_mean_std(feat, eps=1e-5): """Calculate mean and std for adaptive_instance_normalization. Args: feat (Tensor): 4D tensor. eps (float): A small value added to the variance to avoid divide-by-zero. Default: 1e-5. """ size = feat.size() assert len(size) == 4, 'The input feature should be 4D tensor.' b, c = size[:2] feat_var = feat.view(b, c, -1).var(dim=2) + eps feat_std = feat_var.sqrt().view(b, c, 1, 1) feat_mean = feat.view(b, c, -1).mean(dim=2).view(b, c, 1, 1) return feat_mean, feat_std The provided code snippet includes necessary dependencies for implementing the `adaptive_instance_normalization` function. Write a Python function `def adaptive_instance_normalization(content_feat, style_feat)` to solve the following problem: Adaptive instance normalization. Adjust the reference features to have the similar color and illuminations as those in the degradate features. Args: content_feat (Tensor): The reference feature. style_feat (Tensor): The degradate features. Here is the function: def adaptive_instance_normalization(content_feat, style_feat): """Adaptive instance normalization. Adjust the reference features to have the similar color and illuminations as those in the degradate features. Args: content_feat (Tensor): The reference feature. style_feat (Tensor): The degradate features. """ size = content_feat.size() style_mean, style_std = calc_mean_std(style_feat) content_mean, content_std = calc_mean_std(content_feat) normalized_feat = (content_feat - content_mean.expand(size)) / content_std.expand(size) return normalized_feat * style_std.expand(size) + style_mean.expand(size)
Adaptive instance normalization. Adjust the reference features to have similar color and illumination to those in the degraded features. Args: content_feat (Tensor): The reference feature. style_feat (Tensor): The degraded features.
2,696
import argparse, os, sys, glob import PIL import torch import numpy as np import torchvision from omegaconf import OmegaConf from PIL import Image from tqdm import tqdm, trange from itertools import islice from einops import rearrange, repeat from torchvision.utils import make_grid from torch import autocast from contextlib import nullcontext import time from pytorch_lightning import seed_everything from ldm.util import instantiate_from_config from ldm.models.diffusion.ddim import DDIMSampler from basicsr.metrics import calculate_niqe import math import copy import torch.nn.functional as F from scripts.wavelet_color_fix import wavelet_reconstruction, adaptive_instance_normalization The provided code snippet includes necessary dependencies for implementing the `space_timesteps` function. Write a Python function `def space_timesteps(num_timesteps, section_counts)` to solve the following problem: Create a list of timesteps to use from an original diffusion process, given the number of timesteps we want to take from equally-sized portions of the original process. For example, if there's 300 timesteps and the section counts are [10,15,20] then the first 100 timesteps are strided to be 10 timesteps, the second 100 are strided to be 15 timesteps, and the final 100 are strided to be 20. If the stride is a string starting with "ddim", then the fixed striding from the DDIM paper is used, and only one section is allowed. :param num_timesteps: the number of diffusion steps in the original process to divide up. :param section_counts: either a list of numbers, or a string containing comma-separated numbers, indicating the step count per section. As a special case, use "ddimN" where N is a number of steps to use the striding from the DDIM paper. :return: a set of diffusion steps from the original process to use. Here is the function: def space_timesteps(num_timesteps, section_counts): """ Create a list of timesteps to use from an original diffusion process, given the number of timesteps we want to take from equally-sized portions of the original process. For example, if there's 300 timesteps and the section counts are [10,15,20] then the first 100 timesteps are strided to be 10 timesteps, the second 100 are strided to be 15 timesteps, and the final 100 are strided to be 20. If the stride is a string starting with "ddim", then the fixed striding from the DDIM paper is used, and only one section is allowed. :param num_timesteps: the number of diffusion steps in the original process to divide up. :param section_counts: either a list of numbers, or a string containing comma-separated numbers, indicating the step count per section. As a special case, use "ddimN" where N is a number of steps to use the striding from the DDIM paper. :return: a set of diffusion steps from the original process to use. 
""" if isinstance(section_counts, str): if section_counts.startswith("ddim"): desired_count = int(section_counts[len("ddim"):]) for i in range(1, num_timesteps): if len(range(0, num_timesteps, i)) == desired_count: return set(range(0, num_timesteps, i)) raise ValueError( f"cannot create exactly {num_timesteps} steps with an integer stride" ) section_counts = [int(x) for x in section_counts.split(",")] #[250,] size_per = num_timesteps // len(section_counts) extra = num_timesteps % len(section_counts) start_idx = 0 all_steps = [] for i, section_count in enumerate(section_counts): size = size_per + (1 if i < extra else 0) if size < section_count: raise ValueError( f"cannot divide section of {size} steps into {section_count}" ) if section_count <= 1: frac_stride = 1 else: frac_stride = (size - 1) / (section_count - 1) cur_idx = 0.0 taken_steps = [] for _ in range(section_count): taken_steps.append(start_idx + round(cur_idx)) cur_idx += frac_stride all_steps += taken_steps start_idx += size return set(all_steps)
Create a list of timesteps to use from an original diffusion process, given the number of timesteps we want to take from equally-sized portions of the original process. For example, if there's 300 timesteps and the section counts are [10,15,20] then the first 100 timesteps are strided to be 10 timesteps, the second 100 are strided to be 15 timesteps, and the final 100 are strided to be 20. If the stride is a string starting with "ddim", then the fixed striding from the DDIM paper is used, and only one section is allowed. :param num_timesteps: the number of diffusion steps in the original process to divide up. :param section_counts: either a list of numbers, or a string containing comma-separated numbers, indicating the step count per section. As a special case, use "ddimN" where N is a number of steps to use the striding from the DDIM paper. :return: a set of diffusion steps from the original process to use.
2,697
import argparse, os, sys, glob import PIL import torch import numpy as np import torchvision from omegaconf import OmegaConf from PIL import Image from tqdm import tqdm, trange from itertools import islice from einops import rearrange, repeat from torchvision.utils import make_grid from torch import autocast from contextlib import nullcontext import time from pytorch_lightning import seed_everything from ldm.util import instantiate_from_config from ldm.models.diffusion.ddim import DDIMSampler from basicsr.metrics import calculate_niqe import math import copy import torch.nn.functional as F from scripts.wavelet_color_fix import wavelet_reconstruction, adaptive_instance_normalization def chunk(it, size): it = iter(it) return iter(lambda: tuple(islice(it, size)), ())
null
2,698
import argparse, os, sys, glob import PIL import torch import numpy as np import torchvision from omegaconf import OmegaConf from PIL import Image from tqdm import tqdm, trange from itertools import islice from einops import rearrange, repeat from torchvision.utils import make_grid from torch import autocast from contextlib import nullcontext import time from pytorch_lightning import seed_everything from ldm.util import instantiate_from_config from ldm.models.diffusion.ddim import DDIMSampler from basicsr.metrics import calculate_niqe import math import copy import torch.nn.functional as F from scripts.wavelet_color_fix import wavelet_reconstruction, adaptive_instance_normalization def instantiate_from_config(config): if not "target" in config: if config == '__is_first_stage__': return None elif config == "__is_unconditional__": return None raise KeyError("Expected key `target` to instantiate.") return get_obj_from_str(config["target"])(**config.get("params", dict())) def load_model_from_config(config, ckpt, verbose=False): print(f"Loading model from {ckpt}") pl_sd = torch.load(ckpt, map_location="cpu") if "global_step" in pl_sd: print(f"Global Step: {pl_sd['global_step']}") sd = pl_sd["state_dict"] model = instantiate_from_config(config.model) m, u = model.load_state_dict(sd, strict=False) print('>>>>>>>>>>>>>>>>>>>load results>>>>>>>>>>>>>>>>>>>>>>>') if len(m) > 0 and verbose: print("missing keys:") print(m) if len(u) > 0 and verbose: print("unexpected keys:") print(u) model.cuda() model.eval() return model
null
2,699
import argparse, os, sys, glob import PIL import torch import numpy as np import torchvision from omegaconf import OmegaConf from PIL import Image from tqdm import tqdm, trange from itertools import islice from einops import rearrange, repeat from torchvision.utils import make_grid from torch import autocast from contextlib import nullcontext import time from pytorch_lightning import seed_everything from ldm.util import instantiate_from_config from ldm.models.diffusion.ddim import DDIMSampler from basicsr.metrics import calculate_niqe import math import copy import torch.nn.functional as F from scripts.wavelet_color_fix import wavelet_reconstruction, adaptive_instance_normalization def load_img(path): image = Image.open(path).convert("RGB") w, h = image.size print(f"loaded input image of size ({w}, {h}) from {path}") w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32 image = image.resize((w, h), resample=PIL.Image.LANCZOS) image = np.array(image).astype(np.float32) / 255.0 image = image[None].transpose(0, 3, 1, 2) image = torch.from_numpy(image) return 2.*image - 1.
null
2,700
import argparse, os, sys, glob import PIL import torch import numpy as np import torchvision from omegaconf import OmegaConf from PIL import Image from tqdm import tqdm, trange from itertools import islice from einops import rearrange, repeat from torchvision.utils import make_grid from torch import autocast from contextlib import nullcontext import time from pytorch_lightning import seed_everything from ldm.util import instantiate_from_config from ldm.models.diffusion.ddim import DDIMSampler from ldm.models.diffusion.plms import PLMSSampler import math import copy import torch.nn.functional as F import cv2 from util_image import ImageSpliterTh from pathlib import Path from scripts.wavelet_color_fix import wavelet_reconstruction, adaptive_instance_normalization The provided code snippet includes necessary dependencies for implementing the `space_timesteps` function. Write a Python function `def space_timesteps(num_timesteps, section_counts)` to solve the following problem: Create a list of timesteps to use from an original diffusion process, given the number of timesteps we want to take from equally-sized portions of the original process. For example, if there's 300 timesteps and the section counts are [10,15,20] then the first 100 timesteps are strided to be 10 timesteps, the second 100 are strided to be 15 timesteps, and the final 100 are strided to be 20. If the stride is a string starting with "ddim", then the fixed striding from the DDIM paper is used, and only one section is allowed. :param num_timesteps: the number of diffusion steps in the original process to divide up. :param section_counts: either a list of numbers, or a string containing comma-separated numbers, indicating the step count per section. As a special case, use "ddimN" where N is a number of steps to use the striding from the DDIM paper. :return: a set of diffusion steps from the original process to use. Here is the function: def space_timesteps(num_timesteps, section_counts): """ Create a list of timesteps to use from an original diffusion process, given the number of timesteps we want to take from equally-sized portions of the original process. For example, if there's 300 timesteps and the section counts are [10,15,20] then the first 100 timesteps are strided to be 10 timesteps, the second 100 are strided to be 15 timesteps, and the final 100 are strided to be 20. If the stride is a string starting with "ddim", then the fixed striding from the DDIM paper is used, and only one section is allowed. :param num_timesteps: the number of diffusion steps in the original process to divide up. :param section_counts: either a list of numbers, or a string containing comma-separated numbers, indicating the step count per section. As a special case, use "ddimN" where N is a number of steps to use the striding from the DDIM paper. :return: a set of diffusion steps from the original process to use. 
""" if isinstance(section_counts, str): if section_counts.startswith("ddim"): desired_count = int(section_counts[len("ddim"):]) for i in range(1, num_timesteps): if len(range(0, num_timesteps, i)) == desired_count: return set(range(0, num_timesteps, i)) raise ValueError( f"cannot create exactly {num_timesteps} steps with an integer stride" ) section_counts = [int(x) for x in section_counts.split(",")] #[250,] size_per = num_timesteps // len(section_counts) extra = num_timesteps % len(section_counts) start_idx = 0 all_steps = [] for i, section_count in enumerate(section_counts): size = size_per + (1 if i < extra else 0) if size < section_count: raise ValueError( f"cannot divide section of {size} steps into {section_count}" ) if section_count <= 1: frac_stride = 1 else: frac_stride = (size - 1) / (section_count - 1) cur_idx = 0.0 taken_steps = [] for _ in range(section_count): taken_steps.append(start_idx + round(cur_idx)) cur_idx += frac_stride all_steps += taken_steps start_idx += size return set(all_steps)
Create a list of timesteps to use from an original diffusion process, given the number of timesteps we want to take from equally-sized portions of the original process. For example, if there's 300 timesteps and the section counts are [10,15,20] then the first 100 timesteps are strided to be 10 timesteps, the second 100 are strided to be 15 timesteps, and the final 100 are strided to be 20. If the stride is a string starting with "ddim", then the fixed striding from the DDIM paper is used, and only one section is allowed. :param num_timesteps: the number of diffusion steps in the original process to divide up. :param section_counts: either a list of numbers, or a string containing comma-separated numbers, indicating the step count per section. As a special case, use "ddimN" where N is a number of steps to use the striding from the DDIM paper. :return: a set of diffusion steps from the original process to use.
2,701
import argparse, os, sys, glob import PIL import torch import numpy as np import torchvision from omegaconf import OmegaConf from PIL import Image from tqdm import tqdm, trange from itertools import islice from einops import rearrange, repeat from torchvision.utils import make_grid from torch import autocast from contextlib import nullcontext import time from pytorch_lightning import seed_everything from ldm.util import instantiate_from_config from ldm.models.diffusion.ddim import DDIMSampler from ldm.models.diffusion.plms import PLMSSampler import math import copy import torch.nn.functional as F import cv2 from util_image import ImageSpliterTh from pathlib import Path from scripts.wavelet_color_fix import wavelet_reconstruction, adaptive_instance_normalization def chunk(it, size): it = iter(it) return iter(lambda: tuple(islice(it, size)), ())
null
2,702
import argparse, os, sys, glob import PIL import torch import numpy as np import torchvision from omegaconf import OmegaConf from PIL import Image from tqdm import tqdm, trange from itertools import islice from einops import rearrange, repeat from torchvision.utils import make_grid from torch import autocast from contextlib import nullcontext import time from pytorch_lightning import seed_everything from ldm.util import instantiate_from_config from ldm.models.diffusion.ddim import DDIMSampler from ldm.models.diffusion.plms import PLMSSampler import math import copy import torch.nn.functional as F import cv2 from util_image import ImageSpliterTh from pathlib import Path from scripts.wavelet_color_fix import wavelet_reconstruction, adaptive_instance_normalization def instantiate_from_config(config): if not "target" in config: if config == '__is_first_stage__': return None elif config == "__is_unconditional__": return None raise KeyError("Expected key `target` to instantiate.") return get_obj_from_str(config["target"])(**config.get("params", dict())) def load_model_from_config(config, ckpt, verbose=False): print(f"Loading model from {ckpt}") pl_sd = torch.load(ckpt, map_location="cpu") if "global_step" in pl_sd: print(f"Global Step: {pl_sd['global_step']}") sd = pl_sd["state_dict"] model = instantiate_from_config(config.model) m, u = model.load_state_dict(sd, strict=False) if len(m) > 0 and verbose: print("missing keys:") print(m) if len(u) > 0 and verbose: print("unexpected keys:") print(u) model.cuda() model.eval() return model
null
2,703
import argparse, os, sys, glob import PIL import torch import numpy as np import torchvision from omegaconf import OmegaConf from PIL import Image from tqdm import tqdm, trange from itertools import islice from einops import rearrange, repeat from torchvision.utils import make_grid from torch import autocast from contextlib import nullcontext import time from pytorch_lightning import seed_everything from ldm.util import instantiate_from_config from ldm.models.diffusion.ddim import DDIMSampler from ldm.models.diffusion.plms import PLMSSampler import math import copy import torch.nn.functional as F import cv2 from util_image import ImageSpliterTh from pathlib import Path from scripts.wavelet_color_fix import wavelet_reconstruction, adaptive_instance_normalization def load_img(path): image = Image.open(path).convert("RGB") w, h = image.size print(f"loaded input image of size ({w}, {h}) from {path}") w, h = map(lambda x: x - x % 8, (w, h)) # resize to integer multiple of 8 image = image.resize((w, h), resample=PIL.Image.LANCZOS) image = np.array(image).astype(np.float32) / 255.0 image = image[None].transpose(0, 3, 1, 2) image = torch.from_numpy(image) return 2.*image - 1.
null
2,704
import argparse, os, sys, glob import PIL import torch import numpy as np import torchvision from omegaconf import OmegaConf from PIL import Image from tqdm import tqdm, trange from itertools import islice from einops import rearrange, repeat from torchvision.utils import make_grid from torch import autocast from contextlib import nullcontext import time from pytorch_lightning import seed_everything from ldm.util import instantiate_from_config from ldm.models.diffusion.ddim import DDIMSampler from ldm.models.diffusion.plms import PLMSSampler import math import copy import torch.nn.functional as F import cv2 from util_image import ImageSpliterTh from pathlib import Path from scripts.wavelet_color_fix import wavelet_reconstruction, adaptive_instance_normalization def read_image(im_path): im = np.array(Image.open(im_path).convert("RGB")) im = im.astype(np.float32)/255.0 im = im[None].transpose(0,3,1,2) im = (torch.from_numpy(im) - 0.5) / 0.5 return im.cuda()
null
2,705
import argparse, os, sys, glob import PIL import torch import numpy as np import torchvision from omegaconf import OmegaConf from PIL import Image from tqdm import tqdm, trange from itertools import islice from einops import rearrange, repeat from torchvision.utils import make_grid from torch import autocast from contextlib import nullcontext import time from pytorch_lightning import seed_everything from ldm.util import instantiate_from_config from ldm.models.diffusion.ddim import DDIMSampler from ldm.models.diffusion.plms import PLMSSampler from basicsr.metrics import calculate_niqe import math import copy def calc_mean_std(feat, eps=1e-5): """Calculate mean and std for adaptive_instance_normalization. Args: feat (Tensor): 4D tensor. eps (float): A small value added to the variance to avoid divide-by-zero. Default: 1e-5. """ size = feat.size() assert len(size) == 4, 'The input feature should be 4D tensor.' b, c = size[:2] feat_var = feat.view(b, c, -1).var(dim=2) + eps feat_std = feat_var.sqrt().view(b, c, 1, 1) feat_mean = feat.view(b, c, -1).mean(dim=2).view(b, c, 1, 1) return feat_mean, feat_std The provided code snippet includes necessary dependencies for implementing the `adaptive_instance_normalization` function. Write a Python function `def adaptive_instance_normalization(content_feat, style_feat)` to solve the following problem: Adaptive instance normalization. Adjust the reference features to have the similar color and illuminations as those in the degradate features. Args: content_feat (Tensor): The reference feature. style_feat (Tensor): The degradate features. Here is the function: def adaptive_instance_normalization(content_feat, style_feat): """Adaptive instance normalization. Adjust the reference features to have the similar color and illuminations as those in the degradate features. Args: content_feat (Tensor): The reference feature. style_feat (Tensor): The degradate features. """ size = content_feat.size() style_mean, style_std = calc_mean_std(style_feat) content_mean, content_std = calc_mean_std(content_feat) normalized_feat = (content_feat - content_mean.expand(size)) / content_std.expand(size) return normalized_feat * style_std.expand(size) + style_mean.expand(size)
Adaptive instance normalization. Adjust the reference features to have similar color and illumination to those in the degraded features. Args: content_feat (Tensor): The reference feature. style_feat (Tensor): The degraded features.
2,706
import argparse, os, sys, glob import PIL import torch import numpy as np import torchvision from omegaconf import OmegaConf from PIL import Image from tqdm import tqdm, trange from itertools import islice from einops import rearrange, repeat from torchvision.utils import make_grid from torch import autocast from contextlib import nullcontext import time from pytorch_lightning import seed_everything from ldm.util import instantiate_from_config from ldm.models.diffusion.ddim import DDIMSampler from ldm.models.diffusion.plms import PLMSSampler from basicsr.metrics import calculate_niqe import math import copy The provided code snippet includes necessary dependencies for implementing the `space_timesteps` function. Write a Python function `def space_timesteps(num_timesteps, section_counts)` to solve the following problem: Create a list of timesteps to use from an original diffusion process, given the number of timesteps we want to take from equally-sized portions of the original process. For example, if there's 300 timesteps and the section counts are [10,15,20] then the first 100 timesteps are strided to be 10 timesteps, the second 100 are strided to be 15 timesteps, and the final 100 are strided to be 20. If the stride is a string starting with "ddim", then the fixed striding from the DDIM paper is used, and only one section is allowed. :param num_timesteps: the number of diffusion steps in the original process to divide up. :param section_counts: either a list of numbers, or a string containing comma-separated numbers, indicating the step count per section. As a special case, use "ddimN" where N is a number of steps to use the striding from the DDIM paper. :return: a set of diffusion steps from the original process to use. Here is the function: def space_timesteps(num_timesteps, section_counts): """ Create a list of timesteps to use from an original diffusion process, given the number of timesteps we want to take from equally-sized portions of the original process. For example, if there's 300 timesteps and the section counts are [10,15,20] then the first 100 timesteps are strided to be 10 timesteps, the second 100 are strided to be 15 timesteps, and the final 100 are strided to be 20. If the stride is a string starting with "ddim", then the fixed striding from the DDIM paper is used, and only one section is allowed. :param num_timesteps: the number of diffusion steps in the original process to divide up. :param section_counts: either a list of numbers, or a string containing comma-separated numbers, indicating the step count per section. As a special case, use "ddimN" where N is a number of steps to use the striding from the DDIM paper. :return: a set of diffusion steps from the original process to use. 
""" if isinstance(section_counts, str): if section_counts.startswith("ddim"): desired_count = int(section_counts[len("ddim"):]) for i in range(1, num_timesteps): if len(range(0, num_timesteps, i)) == desired_count: return set(range(0, num_timesteps, i)) raise ValueError( f"cannot create exactly {num_timesteps} steps with an integer stride" ) section_counts = [int(x) for x in section_counts.split(",")] #[250,] size_per = num_timesteps // len(section_counts) extra = num_timesteps % len(section_counts) start_idx = 0 all_steps = [] for i, section_count in enumerate(section_counts): size = size_per + (1 if i < extra else 0) if size < section_count: raise ValueError( f"cannot divide section of {size} steps into {section_count}" ) if section_count <= 1: frac_stride = 1 else: frac_stride = (size - 1) / (section_count - 1) cur_idx = 0.0 taken_steps = [] for _ in range(section_count): taken_steps.append(start_idx + round(cur_idx)) cur_idx += frac_stride all_steps += taken_steps start_idx += size return set(all_steps)
Create a list of timesteps to use from an original diffusion process, given the number of timesteps we want to take from equally-sized portions of the original process. For example, if there's 300 timesteps and the section counts are [10,15,20] then the first 100 timesteps are strided to be 10 timesteps, the second 100 are strided to be 15 timesteps, and the final 100 are strided to be 20. If the stride is a string starting with "ddim", then the fixed striding from the DDIM paper is used, and only one section is allowed. :param num_timesteps: the number of diffusion steps in the original process to divide up. :param section_counts: either a list of numbers, or a string containing comma-separated numbers, indicating the step count per section. As a special case, use "ddimN" where N is a number of steps to use the striding from the DDIM paper. :return: a set of diffusion steps from the original process to use.
2,707
import argparse, os, sys, glob import PIL import torch import numpy as np import torchvision from omegaconf import OmegaConf from PIL import Image from tqdm import tqdm, trange from itertools import islice from einops import rearrange, repeat from torchvision.utils import make_grid from torch import autocast from contextlib import nullcontext import time from pytorch_lightning import seed_everything from ldm.util import instantiate_from_config from ldm.models.diffusion.ddim import DDIMSampler from ldm.models.diffusion.plms import PLMSSampler from basicsr.metrics import calculate_niqe import math import copy def chunk(it, size): it = iter(it) return iter(lambda: tuple(islice(it, size)), ())
null
2,708
import argparse, os, sys, glob import PIL import torch import numpy as np import torchvision from omegaconf import OmegaConf from PIL import Image from tqdm import tqdm, trange from itertools import islice from einops import rearrange, repeat from torchvision.utils import make_grid from torch import autocast from contextlib import nullcontext import time from pytorch_lightning import seed_everything from ldm.util import instantiate_from_config from ldm.models.diffusion.ddim import DDIMSampler from ldm.models.diffusion.plms import PLMSSampler from basicsr.metrics import calculate_niqe import math import copy def instantiate_from_config(config): if not "target" in config: if config == '__is_first_stage__': return None elif config == "__is_unconditional__": return None raise KeyError("Expected key `target` to instantiate.") return get_obj_from_str(config["target"])(**config.get("params", dict())) def load_model_from_config(config, ckpt, verbose=False): print(f"Loading model from {ckpt}") pl_sd = torch.load(ckpt, map_location="cpu") if "global_step" in pl_sd: print(f"Global Step: {pl_sd['global_step']}") sd = pl_sd["state_dict"] model = instantiate_from_config(config.model) m, u = model.load_state_dict(sd, strict=False) print('>>>>>>>>>>>>>>>>>>>load results>>>>>>>>>>>>>>>>>>>>>>>') if len(m) > 0 and verbose: print("missing keys:") print(m) if len(u) > 0 and verbose: print("unexpected keys:") print(u) model.cuda() model.eval() return model
null
2,709
import argparse, os, sys, glob import PIL import torch import numpy as np import torchvision from omegaconf import OmegaConf from PIL import Image from tqdm import tqdm, trange from itertools import islice from einops import rearrange, repeat from torchvision.utils import make_grid from torch import autocast from contextlib import nullcontext import time from pytorch_lightning import seed_everything from ldm.util import instantiate_from_config from ldm.models.diffusion.ddim import DDIMSampler from ldm.models.diffusion.plms import PLMSSampler from basicsr.metrics import calculate_niqe import math import copy def load_img(path): image = Image.open(path).convert("RGB") w, h = image.size print(f"loaded input image of size ({w}, {h}) from {path}") w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32 image = image.resize((w, h), resample=PIL.Image.LANCZOS) image = np.array(image).astype(np.float32) / 255.0 image = image[None].transpose(0, 3, 1, 2) image = torch.from_numpy(image) return 2.*image - 1.
null
2,710
import argparse, os, sys, glob import PIL import torch import numpy as np import torchvision from omegaconf import OmegaConf from PIL import Image from tqdm import tqdm, trange from itertools import islice from einops import rearrange, repeat from torchvision.utils import make_grid from torch import autocast from contextlib import nullcontext import time from pytorch_lightning import seed_everything from ldm.util import instantiate_from_config from ldm.models.diffusion.ddim import DDIMSampler from ldm.models.diffusion.plms import PLMSSampler import math import copy import torch.nn.functional as F import cv2 from scripts.wavelet_color_fix import wavelet_reconstruction, adaptive_instance_normalization The provided code snippet includes necessary dependencies for implementing the `space_timesteps` function. Write a Python function `def space_timesteps(num_timesteps, section_counts)` to solve the following problem: Create a list of timesteps to use from an original diffusion process, given the number of timesteps we want to take from equally-sized portions of the original process. For example, if there's 300 timesteps and the section counts are [10,15,20] then the first 100 timesteps are strided to be 10 timesteps, the second 100 are strided to be 15 timesteps, and the final 100 are strided to be 20. If the stride is a string starting with "ddim", then the fixed striding from the DDIM paper is used, and only one section is allowed. :param num_timesteps: the number of diffusion steps in the original process to divide up. :param section_counts: either a list of numbers, or a string containing comma-separated numbers, indicating the step count per section. As a special case, use "ddimN" where N is a number of steps to use the striding from the DDIM paper. :return: a set of diffusion steps from the original process to use. Here is the function: def space_timesteps(num_timesteps, section_counts): """ Create a list of timesteps to use from an original diffusion process, given the number of timesteps we want to take from equally-sized portions of the original process. For example, if there's 300 timesteps and the section counts are [10,15,20] then the first 100 timesteps are strided to be 10 timesteps, the second 100 are strided to be 15 timesteps, and the final 100 are strided to be 20. If the stride is a string starting with "ddim", then the fixed striding from the DDIM paper is used, and only one section is allowed. :param num_timesteps: the number of diffusion steps in the original process to divide up. :param section_counts: either a list of numbers, or a string containing comma-separated numbers, indicating the step count per section. As a special case, use "ddimN" where N is a number of steps to use the striding from the DDIM paper. :return: a set of diffusion steps from the original process to use. 
""" if isinstance(section_counts, str): if section_counts.startswith("ddim"): desired_count = int(section_counts[len("ddim"):]) for i in range(1, num_timesteps): if len(range(0, num_timesteps, i)) == desired_count: return set(range(0, num_timesteps, i)) raise ValueError( f"cannot create exactly {num_timesteps} steps with an integer stride" ) section_counts = [int(x) for x in section_counts.split(",")] #[250,] size_per = num_timesteps // len(section_counts) extra = num_timesteps % len(section_counts) start_idx = 0 all_steps = [] for i, section_count in enumerate(section_counts): size = size_per + (1 if i < extra else 0) if size < section_count: raise ValueError( f"cannot divide section of {size} steps into {section_count}" ) if section_count <= 1: frac_stride = 1 else: frac_stride = (size - 1) / (section_count - 1) cur_idx = 0.0 taken_steps = [] for _ in range(section_count): taken_steps.append(start_idx + round(cur_idx)) cur_idx += frac_stride all_steps += taken_steps start_idx += size return set(all_steps)
Create a list of timesteps to use from an original diffusion process, given the number of timesteps we want to take from equally-sized portions of the original process. For example, if there's 300 timesteps and the section counts are [10,15,20] then the first 100 timesteps are strided to be 10 timesteps, the second 100 are strided to be 15 timesteps, and the final 100 are strided to be 20. If the stride is a string starting with "ddim", then the fixed striding from the DDIM paper is used, and only one section is allowed. :param num_timesteps: the number of diffusion steps in the original process to divide up. :param section_counts: either a list of numbers, or a string containing comma-separated numbers, indicating the step count per section. As a special case, use "ddimN" where N is a number of steps to use the striding from the DDIM paper. :return: a set of diffusion steps from the original process to use.
2,711
import argparse, os, sys, glob import PIL import torch import numpy as np import torchvision from omegaconf import OmegaConf from PIL import Image from tqdm import tqdm, trange from itertools import islice from einops import rearrange, repeat from torchvision.utils import make_grid from torch import autocast from contextlib import nullcontext import time from pytorch_lightning import seed_everything from ldm.util import instantiate_from_config from ldm.models.diffusion.ddim import DDIMSampler from ldm.models.diffusion.plms import PLMSSampler import math import copy import torch.nn.functional as F import cv2 from scripts.wavelet_color_fix import wavelet_reconstruction, adaptive_instance_normalization def chunk(it, size): it = iter(it) return iter(lambda: tuple(islice(it, size)), ())
null
2,712
import argparse, os, sys, glob import PIL import torch import numpy as np import torchvision from omegaconf import OmegaConf from PIL import Image from tqdm import tqdm, trange from itertools import islice from einops import rearrange, repeat from torchvision.utils import make_grid from torch import autocast from contextlib import nullcontext import time from pytorch_lightning import seed_everything from ldm.util import instantiate_from_config from ldm.models.diffusion.ddim import DDIMSampler from ldm.models.diffusion.plms import PLMSSampler import math import copy import torch.nn.functional as F import cv2 from scripts.wavelet_color_fix import wavelet_reconstruction, adaptive_instance_normalization def instantiate_from_config(config): if not "target" in config: if config == '__is_first_stage__': return None elif config == "__is_unconditional__": return None raise KeyError("Expected key `target` to instantiate.") return get_obj_from_str(config["target"])(**config.get("params", dict())) def load_model_from_config(config, ckpt, verbose=False): print(f"Loading model from {ckpt}") pl_sd = torch.load(ckpt, map_location="cpu") if "global_step" in pl_sd: print(f"Global Step: {pl_sd['global_step']}") sd = pl_sd["state_dict"] model = instantiate_from_config(config.model) m, u = model.load_state_dict(sd, strict=False) if len(m) > 0 and verbose: print("missing keys:") print(m) if len(u) > 0 and verbose: print("unexpected keys:") print(u) model.cuda() model.eval() return model
null
2,713
import argparse, os, sys, glob import PIL import torch import numpy as np import torchvision from omegaconf import OmegaConf from PIL import Image from tqdm import tqdm, trange from itertools import islice from einops import rearrange, repeat from torchvision.utils import make_grid from torch import autocast from contextlib import nullcontext import time from pytorch_lightning import seed_everything from ldm.util import instantiate_from_config from ldm.models.diffusion.ddim import DDIMSampler from ldm.models.diffusion.plms import PLMSSampler import math import copy import torch.nn.functional as F import cv2 from scripts.wavelet_color_fix import wavelet_reconstruction, adaptive_instance_normalization def load_img(path): image = Image.open(path).convert("RGB") w, h = image.size print(f"loaded input image of size ({w}, {h}) from {path}") w, h = map(lambda x: x - x % 8, (w, h)) # resize to integer multiple of 8 image = image.resize((w, h), resample=PIL.Image.LANCZOS) image = np.array(image).astype(np.float32) / 255.0 image = image[None].transpose(0, 3, 1, 2) image = torch.from_numpy(image) return 2.*image - 1.
null
2,714
import argparse, os, sys, glob import PIL import torch import numpy as np import torchvision from omegaconf import OmegaConf from PIL import Image from tqdm import tqdm, trange from itertools import islice from einops import rearrange, repeat from torchvision.utils import make_grid from torch import autocast from contextlib import nullcontext import time from pytorch_lightning import seed_everything from ldm.util import instantiate_from_config from ldm.models.diffusion.ddim import DDIMSampler from ldm.models.diffusion.plms import PLMSSampler import math import copy from scripts.wavelet_color_fix import wavelet_reconstruction, adaptive_instance_normalization The provided code snippet includes necessary dependencies for implementing the `space_timesteps` function. Write a Python function `def space_timesteps(num_timesteps, section_counts)` to solve the following problem: Create a list of timesteps to use from an original diffusion process, given the number of timesteps we want to take from equally-sized portions of the original process. For example, if there's 300 timesteps and the section counts are [10,15,20] then the first 100 timesteps are strided to be 10 timesteps, the second 100 are strided to be 15 timesteps, and the final 100 are strided to be 20. If the stride is a string starting with "ddim", then the fixed striding from the DDIM paper is used, and only one section is allowed. :param num_timesteps: the number of diffusion steps in the original process to divide up. :param section_counts: either a list of numbers, or a string containing comma-separated numbers, indicating the step count per section. As a special case, use "ddimN" where N is a number of steps to use the striding from the DDIM paper. :return: a set of diffusion steps from the original process to use. Here is the function: def space_timesteps(num_timesteps, section_counts): """ Create a list of timesteps to use from an original diffusion process, given the number of timesteps we want to take from equally-sized portions of the original process. For example, if there's 300 timesteps and the section counts are [10,15,20] then the first 100 timesteps are strided to be 10 timesteps, the second 100 are strided to be 15 timesteps, and the final 100 are strided to be 20. If the stride is a string starting with "ddim", then the fixed striding from the DDIM paper is used, and only one section is allowed. :param num_timesteps: the number of diffusion steps in the original process to divide up. :param section_counts: either a list of numbers, or a string containing comma-separated numbers, indicating the step count per section. As a special case, use "ddimN" where N is a number of steps to use the striding from the DDIM paper. :return: a set of diffusion steps from the original process to use. 
""" if isinstance(section_counts, str): if section_counts.startswith("ddim"): desired_count = int(section_counts[len("ddim"):]) for i in range(1, num_timesteps): if len(range(0, num_timesteps, i)) == desired_count: return set(range(0, num_timesteps, i)) raise ValueError( f"cannot create exactly {num_timesteps} steps with an integer stride" ) section_counts = [int(x) for x in section_counts.split(",")] #[250,] size_per = num_timesteps // len(section_counts) extra = num_timesteps % len(section_counts) start_idx = 0 all_steps = [] for i, section_count in enumerate(section_counts): size = size_per + (1 if i < extra else 0) if size < section_count: raise ValueError( f"cannot divide section of {size} steps into {section_count}" ) if section_count <= 1: frac_stride = 1 else: frac_stride = (size - 1) / (section_count - 1) cur_idx = 0.0 taken_steps = [] for _ in range(section_count): taken_steps.append(start_idx + round(cur_idx)) cur_idx += frac_stride all_steps += taken_steps start_idx += size return set(all_steps)
Create a list of timesteps to use from an original diffusion process, given the number of timesteps we want to take from equally-sized portions of the original process. For example, if there's 300 timesteps and the section counts are [10,15,20] then the first 100 timesteps are strided to be 10 timesteps, the second 100 are strided to be 15 timesteps, and the final 100 are strided to be 20. If the stride is a string starting with "ddim", then the fixed striding from the DDIM paper is used, and only one section is allowed. :param num_timesteps: the number of diffusion steps in the original process to divide up. :param section_counts: either a list of numbers, or a string containing comma-separated numbers, indicating the step count per section. As a special case, use "ddimN" where N is a number of steps to use the striding from the DDIM paper. :return: a set of diffusion steps from the original process to use.
2,715
import argparse, os, sys, glob import PIL import torch import numpy as np import torchvision from omegaconf import OmegaConf from PIL import Image from tqdm import tqdm, trange from itertools import islice from einops import rearrange, repeat from torchvision.utils import make_grid from torch import autocast from contextlib import nullcontext import time from pytorch_lightning import seed_everything from ldm.util import instantiate_from_config from ldm.models.diffusion.ddim import DDIMSampler from ldm.models.diffusion.plms import PLMSSampler import math import copy from scripts.wavelet_color_fix import wavelet_reconstruction, adaptive_instance_normalization def chunk(it, size): it = iter(it) return iter(lambda: tuple(islice(it, size)), ())
null
2,716
import argparse, os, sys, glob import PIL import torch import numpy as np import torchvision from omegaconf import OmegaConf from PIL import Image from tqdm import tqdm, trange from itertools import islice from einops import rearrange, repeat from torchvision.utils import make_grid from torch import autocast from contextlib import nullcontext import time from pytorch_lightning import seed_everything from ldm.util import instantiate_from_config from ldm.models.diffusion.ddim import DDIMSampler from ldm.models.diffusion.plms import PLMSSampler import math import copy from scripts.wavelet_color_fix import wavelet_reconstruction, adaptive_instance_normalization def instantiate_from_config(config): if not "target" in config: if config == '__is_first_stage__': return None elif config == "__is_unconditional__": return None raise KeyError("Expected key `target` to instantiate.") return get_obj_from_str(config["target"])(**config.get("params", dict())) def load_model_from_config(config, ckpt, verbose=False): print(f"Loading model from {ckpt}") pl_sd = torch.load(ckpt, map_location="cpu") if "global_step" in pl_sd: print(f"Global Step: {pl_sd['global_step']}") sd = pl_sd["state_dict"] model = instantiate_from_config(config.model) m, u = model.load_state_dict(sd, strict=False) if len(m) > 0 and verbose: print("missing keys:") print(m) if len(u) > 0 and verbose: print("unexpected keys:") print(u) model.cuda() model.eval() return model
null
2,717
import argparse, os, sys, glob import PIL import torch import numpy as np import torchvision from omegaconf import OmegaConf from PIL import Image from tqdm import tqdm, trange from itertools import islice from einops import rearrange, repeat from torchvision.utils import make_grid from torch import autocast from contextlib import nullcontext import time from pytorch_lightning import seed_everything from ldm.util import instantiate_from_config from ldm.models.diffusion.ddim import DDIMSampler from ldm.models.diffusion.plms import PLMSSampler import math import copy from scripts.wavelet_color_fix import wavelet_reconstruction, adaptive_instance_normalization def load_img(path): image = Image.open(path).convert("RGB") w, h = image.size print(f"loaded input image of size ({w}, {h}) from {path}") w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32 image = image.resize((w, h), resample=PIL.Image.LANCZOS) image = np.array(image).astype(np.float32) / 255.0 image = image[None].transpose(0, 3, 1, 2) image = torch.from_numpy(image) return 2.*image - 1.
null
2,718
import argparse, os, sys, glob import PIL import torch import numpy as np import torchvision from omegaconf import OmegaConf from PIL import Image from tqdm import tqdm, trange from itertools import islice from einops import rearrange, repeat from torchvision.utils import make_grid from torch import autocast from contextlib import nullcontext import time from pytorch_lightning import seed_everything from ldm.util import instantiate_from_config from ldm.models.diffusion.ddim import DDIMSampler from basicsr.metrics import calculate_niqe import math import copy from scripts.wavelet_color_fix import wavelet_reconstruction, adaptive_instance_normalization def calc_mean_std(feat, eps=1e-5): """Calculate mean and std for adaptive_instance_normalization. Args: feat (Tensor): 4D tensor. eps (float): A small value added to the variance to avoid divide-by-zero. Default: 1e-5. """ size = feat.size() assert len(size) == 4, 'The input feature should be 4D tensor.' b, c = size[:2] feat_var = feat.view(b, c, -1).var(dim=2) + eps feat_std = feat_var.sqrt().view(b, c, 1, 1) feat_mean = feat.view(b, c, -1).mean(dim=2).view(b, c, 1, 1) return feat_mean, feat_std The provided code snippet includes necessary dependencies for implementing the `adaptive_instance_normalization` function. Write a Python function `def adaptive_instance_normalization(content_feat, style_feat)` to solve the following problem: Adaptive instance normalization. Adjust the reference features to have the similar color and illuminations as those in the degradate features. Args: content_feat (Tensor): The reference feature. style_feat (Tensor): The degradate features. Here is the function: def adaptive_instance_normalization(content_feat, style_feat): """Adaptive instance normalization. Adjust the reference features to have the similar color and illuminations as those in the degradate features. Args: content_feat (Tensor): The reference feature. style_feat (Tensor): The degradate features. """ size = content_feat.size() style_mean, style_std = calc_mean_std(style_feat) content_mean, content_std = calc_mean_std(content_feat) normalized_feat = (content_feat - content_mean.expand(size)) / content_std.expand(size) return normalized_feat * style_std.expand(size) + style_mean.expand(size)
Adaptive instance normalization. Adjust the reference features so that they have a similar color and illumination to those in the degraded features. Args: content_feat (Tensor): The reference feature. style_feat (Tensor): The degraded features.
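A minimal sketch of how the normalization above is applied, assuming adaptive_instance_normalization and its helper calc_mean_std from this entry are in scope; the tensors are random placeholders.

import torch

reference = torch.rand(1, 3, 64, 64)   # reference feature, N x C x H x W
degraded = torch.rand(1, 3, 64, 64)    # degraded feature whose statistics are matched
out = adaptive_instance_normalization(reference, degraded)
# Per-channel mean and std of `out` now follow those of `degraded`.
print(out.shape)  # torch.Size([1, 3, 64, 64])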
2,719
import argparse, os, sys, glob import PIL import torch import numpy as np import torchvision from omegaconf import OmegaConf from PIL import Image from tqdm import tqdm, trange from itertools import islice from einops import rearrange, repeat from torchvision.utils import make_grid from torch import autocast from contextlib import nullcontext import time from pytorch_lightning import seed_everything from ldm.util import instantiate_from_config from ldm.models.diffusion.ddim import DDIMSampler from basicsr.metrics import calculate_niqe import math import copy from scripts.wavelet_color_fix import wavelet_reconstruction, adaptive_instance_normalization The provided code snippet includes necessary dependencies for implementing the `space_timesteps` function. Write a Python function `def space_timesteps(num_timesteps, section_counts)` to solve the following problem: Create a list of timesteps to use from an original diffusion process, given the number of timesteps we want to take from equally-sized portions of the original process. For example, if there's 300 timesteps and the section counts are [10,15,20] then the first 100 timesteps are strided to be 10 timesteps, the second 100 are strided to be 15 timesteps, and the final 100 are strided to be 20. If the stride is a string starting with "ddim", then the fixed striding from the DDIM paper is used, and only one section is allowed. :param num_timesteps: the number of diffusion steps in the original process to divide up. :param section_counts: either a list of numbers, or a string containing comma-separated numbers, indicating the step count per section. As a special case, use "ddimN" where N is a number of steps to use the striding from the DDIM paper. :return: a set of diffusion steps from the original process to use. Here is the function: def space_timesteps(num_timesteps, section_counts): """ Create a list of timesteps to use from an original diffusion process, given the number of timesteps we want to take from equally-sized portions of the original process. For example, if there's 300 timesteps and the section counts are [10,15,20] then the first 100 timesteps are strided to be 10 timesteps, the second 100 are strided to be 15 timesteps, and the final 100 are strided to be 20. If the stride is a string starting with "ddim", then the fixed striding from the DDIM paper is used, and only one section is allowed. :param num_timesteps: the number of diffusion steps in the original process to divide up. :param section_counts: either a list of numbers, or a string containing comma-separated numbers, indicating the step count per section. As a special case, use "ddimN" where N is a number of steps to use the striding from the DDIM paper. :return: a set of diffusion steps from the original process to use. 
""" if isinstance(section_counts, str): if section_counts.startswith("ddim"): desired_count = int(section_counts[len("ddim"):]) for i in range(1, num_timesteps): if len(range(0, num_timesteps, i)) == desired_count: return set(range(0, num_timesteps, i)) raise ValueError( f"cannot create exactly {num_timesteps} steps with an integer stride" ) section_counts = [int(x) for x in section_counts.split(",")] #[250,] size_per = num_timesteps // len(section_counts) extra = num_timesteps % len(section_counts) start_idx = 0 all_steps = [] for i, section_count in enumerate(section_counts): size = size_per + (1 if i < extra else 0) if size < section_count: raise ValueError( f"cannot divide section of {size} steps into {section_count}" ) if section_count <= 1: frac_stride = 1 else: frac_stride = (size - 1) / (section_count - 1) cur_idx = 0.0 taken_steps = [] for _ in range(section_count): taken_steps.append(start_idx + round(cur_idx)) cur_idx += frac_stride all_steps += taken_steps start_idx += size return set(all_steps)
Create a list of timesteps to use from an original diffusion process, given the number of timesteps we want to take from equally-sized portions of the original process. For example, if there are 300 timesteps and the section counts are [10,15,20], then the first 100 timesteps are strided to be 10 timesteps, the second 100 are strided to be 15 timesteps, and the final 100 are strided to be 20. If the stride is a string starting with "ddim", then the fixed striding from the DDIM paper is used, and only one section is allowed. :param num_timesteps: the number of diffusion steps in the original process to divide up. :param section_counts: either a list of numbers, or a string containing comma-separated numbers, indicating the step count per section. As a special case, use "ddimN" where N is a number of steps to use the striding from the DDIM paper. :return: a set of diffusion steps from the original process to use.
2,720
import argparse, os, sys, glob import PIL import torch import numpy as np import torchvision from omegaconf import OmegaConf from PIL import Image from tqdm import tqdm, trange from itertools import islice from einops import rearrange, repeat from torchvision.utils import make_grid from torch import autocast from contextlib import nullcontext import time from pytorch_lightning import seed_everything from ldm.util import instantiate_from_config from ldm.models.diffusion.ddim import DDIMSampler from basicsr.metrics import calculate_niqe import math import copy from scripts.wavelet_color_fix import wavelet_reconstruction, adaptive_instance_normalization def chunk(it, size): it = iter(it) return iter(lambda: tuple(islice(it, size)), ())
null
2,721
import argparse, os, sys, glob import PIL import torch import numpy as np import torchvision from omegaconf import OmegaConf from PIL import Image from tqdm import tqdm, trange from itertools import islice from einops import rearrange, repeat from torchvision.utils import make_grid from torch import autocast from contextlib import nullcontext import time from pytorch_lightning import seed_everything from ldm.util import instantiate_from_config from ldm.models.diffusion.ddim import DDIMSampler from basicsr.metrics import calculate_niqe import math import copy from scripts.wavelet_color_fix import wavelet_reconstruction, adaptive_instance_normalization def instantiate_from_config(config): if not "target" in config: if config == '__is_first_stage__': return None elif config == "__is_unconditional__": return None raise KeyError("Expected key `target` to instantiate.") return get_obj_from_str(config["target"])(**config.get("params", dict())) def load_model_from_config(config, ckpt, verbose=False): print(f"Loading model from {ckpt}") pl_sd = torch.load(ckpt, map_location="cpu") if "global_step" in pl_sd: print(f"Global Step: {pl_sd['global_step']}") sd = pl_sd["state_dict"] model = instantiate_from_config(config.model) m, u = model.load_state_dict(sd, strict=False) print('>>>>>>>>>>>>>>>>>>>load results>>>>>>>>>>>>>>>>>>>>>>>') if len(m) > 0 and verbose: print("missing keys:") print(m) if len(u) > 0 and verbose: print("unexpected keys:") print(u) model.cuda() model.eval() return model
null
2,722
import argparse, os, sys, glob import PIL import torch import numpy as np import torchvision from omegaconf import OmegaConf from PIL import Image from tqdm import tqdm, trange from itertools import islice from einops import rearrange, repeat from torchvision.utils import make_grid from torch import autocast from contextlib import nullcontext import time from pytorch_lightning import seed_everything from ldm.util import instantiate_from_config from ldm.models.diffusion.ddim import DDIMSampler from basicsr.metrics import calculate_niqe import math import copy from scripts.wavelet_color_fix import wavelet_reconstruction, adaptive_instance_normalization def load_img(path): image = Image.open(path).convert("RGB") w, h = image.size print(f"loaded input image of size ({w}, {h}) from {path}") w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32 image = image.resize((w, h), resample=PIL.Image.LANCZOS) image = np.array(image).astype(np.float32) / 255.0 image = image[None].transpose(0, 3, 1, 2) image = torch.from_numpy(image) return 2.*image - 1.
null
2,723
import torch from PIL import Image from torch import Tensor from torch.nn import functional as F from torchvision.transforms import ToTensor, ToPILImage def adaptive_instance_normalization(content_feat:Tensor, style_feat:Tensor): """Adaptive instance normalization. Adjust the reference features to have the similar color and illuminations as those in the degradate features. Args: content_feat (Tensor): The reference feature. style_feat (Tensor): The degradate features. """ size = content_feat.size() style_mean, style_std = calc_mean_std(style_feat) content_mean, content_std = calc_mean_std(content_feat) normalized_feat = (content_feat - content_mean.expand(size)) / content_std.expand(size) return normalized_feat * style_std.expand(size) + style_mean.expand(size) def adain_color_fix(target: Image, source: Image): # Convert images to tensors to_tensor = ToTensor() target_tensor = to_tensor(target).unsqueeze(0) source_tensor = to_tensor(source).unsqueeze(0) # Apply adaptive instance normalization result_tensor = adaptive_instance_normalization(target_tensor, source_tensor) # Convert tensor back to image to_image = ToPILImage() result_image = to_image(result_tensor.squeeze(0).clamp_(0.0, 1.0)) return result_image
null
2,724
import torch from PIL import Image from torch import Tensor from torch.nn import functional as F from torchvision.transforms import ToTensor, ToPILImage def wavelet_reconstruction(content_feat:Tensor, style_feat:Tensor): def wavelet_color_fix(target: Image, source: Image): # Convert images to tensors to_tensor = ToTensor() target_tensor = to_tensor(target).unsqueeze(0) source_tensor = to_tensor(source).unsqueeze(0) # Apply wavelet reconstruction result_tensor = wavelet_reconstruction(target_tensor, source_tensor) # Convert tensor back to image to_image = ToPILImage() result_image = to_image(result_tensor.squeeze(0).clamp_(0.0, 1.0)) return result_image
null
2,725
import sys import cv2 import math import torch import random import numpy as np from scipy import fft from pathlib import Path from einops import rearrange from skimage import img_as_ubyte, img_as_float32 def calculate_psnr(im1, im2, border=0, ycbcr=False): def rgb2ycbcrTorch(im, only_y=True): def batch_PSNR(img, imclean, border=0, ycbcr=False): if ycbcr: img = rgb2ycbcrTorch(img, True) imclean = rgb2ycbcrTorch(imclean, True) Img = img.data.cpu().numpy() Iclean = imclean.data.cpu().numpy() Img = img_as_ubyte(Img) Iclean = img_as_ubyte(Iclean) PSNR = 0 h, w = Iclean.shape[2:] for i in range(Img.shape[0]): PSNR += calculate_psnr(Iclean[i,:,].transpose((1,2,0)), Img[i,:,].transpose((1,2,0)), border) return PSNR
null
2,726
import sys import cv2 import math import torch import random import numpy as np from scipy import fft from pathlib import Path from einops import rearrange from skimage import img_as_ubyte, img_as_float32 def calculate_ssim(im1, im2, border=0, ycbcr=False): ''' SSIM the same outputs as MATLAB's im1, im2: h x w x , [0, 255], uint8 ''' if not im1.shape == im2.shape: raise ValueError('Input images must have the same dimensions.') if ycbcr: im1 = rgb2ycbcr(im1, True) im2 = rgb2ycbcr(im2, True) h, w = im1.shape[:2] im1 = im1[border:h-border, border:w-border] im2 = im2[border:h-border, border:w-border] if im1.ndim == 2: return ssim(im1, im2) elif im1.ndim == 3: if im1.shape[2] == 3: ssims = [] for i in range(3): ssims.append(ssim(im1[:,:,i], im2[:,:,i])) return np.array(ssims).mean() elif im1.shape[2] == 1: return ssim(np.squeeze(im1), np.squeeze(im2)) else: raise ValueError('Wrong input image dimensions.') def rgb2ycbcrTorch(im, only_y=True): ''' same as matlab rgb2ycbcr Input: im: float [0,1], N x 3 x H x W only_y: only return Y channel ''' # transform to range [0,255.0] im_temp = im.permute([0,2,3,1]) * 255.0 # N x H x W x C --> N x H x W x C # convert if only_y: rlt = torch.matmul(im_temp, torch.tensor([65.481, 128.553, 24.966], device=im.device, dtype=im.dtype).view([3,1])/ 255.0) + 16.0 else: rlt = torch.matmul(im_temp, torch.tensor([[65.481, -37.797, 112.0 ], [128.553, -74.203, -93.786], [24.966, 112.0, -18.214]], device=im.device, dtype=im.dtype)/255.0) + \ torch.tensor([16, 128, 128]).view([-1, 1, 1, 3]) rlt /= 255.0 rlt.clamp_(0.0, 1.0) return rlt.permute([0, 3, 1, 2]) def batch_SSIM(img, imclean, border=0, ycbcr=False): if ycbcr: img = rgb2ycbcrTorch(img, True) imclean = rgb2ycbcrTorch(imclean, True) Img = img.data.cpu().numpy() Iclean = imclean.data.cpu().numpy() Img = img_as_ubyte(Img) Iclean = img_as_ubyte(Iclean) SSIM = 0 for i in range(Img.shape[0]): SSIM += calculate_ssim(Iclean[i,:,].transpose((1,2,0)), Img[i,:,].transpose((1,2,0)), border) return SSIM
null
2,727
import sys import cv2 import math import torch import random import numpy as np from scipy import fft from pathlib import Path from einops import rearrange from skimage import img_as_ubyte, img_as_float32 The provided code snippet includes necessary dependencies for implementing the `normalize_np` function. Write a Python function `def normalize_np(im, mean=0.5, std=0.5, reverse=False)` to solve the following problem: Input: im: h x w x c, numpy array Normalize: (im - mean) / std Reverse: im * std + mean Here is the function: def normalize_np(im, mean=0.5, std=0.5, reverse=False): ''' Input: im: h x w x c, numpy array Normalize: (im - mean) / std Reverse: im * std + mean ''' if not isinstance(mean, (list, tuple)): mean = [mean, ] * im.shape[2] mean = np.array(mean).reshape([1, 1, im.shape[2]]) if not isinstance(std, (list, tuple)): std = [std, ] * im.shape[2] std = np.array(std).reshape([1, 1, im.shape[2]]) if not reverse: out = (im.astype(np.float32) - mean) / std else: out = im.astype(np.float32) * std + mean return out
Input: im: h x w x c, numpy array Normalize: (im - mean) / std Reverse: im * std + mean
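A small round-trip sketch for the helper above, assuming normalize_np from this entry is in scope; the image is a random placeholder.

import numpy as np

im = np.random.rand(8, 8, 3).astype(np.float32)
norm = normalize_np(im, mean=0.5, std=0.5)                  # map [0, 1] roughly onto [-1, 1]
back = normalize_np(norm, mean=0.5, std=0.5, reverse=True)  # undo the normalization
assert np.allclose(back, im, atol=1e-6)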
2,728
import sys import cv2 import math import torch import random import numpy as np from scipy import fft from pathlib import Path from einops import rearrange from skimage import img_as_ubyte, img_as_float32 The provided code snippet includes necessary dependencies for implementing the `normalize_th` function. Write a Python function `def normalize_th(im, mean=0.5, std=0.5, reverse=False)` to solve the following problem: Input: im: b x c x h x w, torch tensor Normalize: (im - mean) / std Reverse: im * std + mean Here is the function: def normalize_th(im, mean=0.5, std=0.5, reverse=False): ''' Input: im: b x c x h x w, torch tensor Normalize: (im - mean) / std Reverse: im * std + mean ''' if not isinstance(mean, (list, tuple)): mean = [mean, ] * im.shape[1] mean = torch.tensor(mean, device=im.device).view([1, im.shape[1], 1, 1]) if not isinstance(std, (list, tuple)): std = [std, ] * im.shape[1] std = torch.tensor(std, device=im.device).view([1, im.shape[1], 1, 1]) if not reverse: out = (im - mean) / std else: out = im * std + mean return out
Input: im: b x c x h x w, torch tensor Normalize: (im - mean) / std Reverse: im * std + mean
2,729
import sys import cv2 import math import torch import random import numpy as np from scipy import fft from pathlib import Path from einops import rearrange from skimage import img_as_ubyte, img_as_float32 The provided code snippet includes necessary dependencies for implementing the `tensor2img` function. Write a Python function `def tensor2img(tensor, rgb2bgr=True, out_type=np.uint8, min_max=(0, 1))` to solve the following problem: Convert torch Tensors into image numpy arrays. After clamping to [min, max], values will be normalized to [0, 1]. Args: tensor (Tensor or list[Tensor]): Accept shapes: 1) 4D mini-batch Tensor of shape (B x 3/1 x H x W); 2) 3D Tensor of shape (3/1 x H x W); 3) 2D Tensor of shape (H x W). Tensor channel should be in RGB order. rgb2bgr (bool): Whether to change rgb to bgr. out_type (numpy type): output types. If ``np.uint8``, transform outputs to uint8 type with range [0, 255]; otherwise, float type with range [0, 1]. Default: ``np.uint8``. min_max (tuple[int]): min and max values for clamp. Returns: (Tensor or list): 3D ndarray of shape (H x W x C) OR 2D ndarray of shape (H x W). The channel order is BGR. Here is the function: def tensor2img(tensor, rgb2bgr=True, out_type=np.uint8, min_max=(0, 1)): """Convert torch Tensors into image numpy arrays. After clamping to [min, max], values will be normalized to [0, 1]. Args: tensor (Tensor or list[Tensor]): Accept shapes: 1) 4D mini-batch Tensor of shape (B x 3/1 x H x W); 2) 3D Tensor of shape (3/1 x H x W); 3) 2D Tensor of shape (H x W). Tensor channel should be in RGB order. rgb2bgr (bool): Whether to change rgb to bgr. out_type (numpy type): output types. If ``np.uint8``, transform outputs to uint8 type with range [0, 255]; otherwise, float type with range [0, 1]. Default: ``np.uint8``. min_max (tuple[int]): min and max values for clamp. Returns: (Tensor or list): 3D ndarray of shape (H x W x C) OR 2D ndarray of shape (H x W). The channel order is BGR. """ if not (torch.is_tensor(tensor) or (isinstance(tensor, list) and all(torch.is_tensor(t) for t in tensor))): raise TypeError(f'tensor or list of tensors expected, got {type(tensor)}') flag_tensor = torch.is_tensor(tensor) if flag_tensor: tensor = [tensor] result = [] for _tensor in tensor: _tensor = _tensor.squeeze(0).float().detach().cpu().clamp_(*min_max) _tensor = (_tensor - min_max[0]) / (min_max[1] - min_max[0]) n_dim = _tensor.dim() if n_dim == 4: img_np = make_grid(_tensor, nrow=int(math.sqrt(_tensor.size(0))), normalize=False).numpy() img_np = img_np.transpose(1, 2, 0) if rgb2bgr: img_np = cv2.cvtColor(img_np, cv2.COLOR_RGB2BGR) elif n_dim == 3: img_np = _tensor.numpy() img_np = img_np.transpose(1, 2, 0) if img_np.shape[2] == 1: # gray image img_np = np.squeeze(img_np, axis=2) else: if rgb2bgr: img_np = cv2.cvtColor(img_np, cv2.COLOR_RGB2BGR) elif n_dim == 2: img_np = _tensor.numpy() else: raise TypeError(f'Only support 4D, 3D or 2D tensor. But received with dimension: {n_dim}') if out_type == np.uint8: # Unlike MATLAB, numpy.unit8() WILL NOT round by default. img_np = (img_np * 255.0).round() img_np = img_np.astype(out_type) result.append(img_np) if len(result) == 1 and flag_tensor: result = result[0] return result
Convert torch Tensors into image numpy arrays. After clamping to [min, max], values will be normalized to [0, 1]. Args: tensor (Tensor or list[Tensor]): Accept shapes: 1) 4D mini-batch Tensor of shape (B x 3/1 x H x W); 2) 3D Tensor of shape (3/1 x H x W); 3) 2D Tensor of shape (H x W). Tensor channel should be in RGB order. rgb2bgr (bool): Whether to change rgb to bgr. out_type (numpy type): output types. If ``np.uint8``, transform outputs to uint8 type with range [0, 255]; otherwise, float type with range [0, 1]. Default: ``np.uint8``. min_max (tuple[int]): min and max values for clamp. Returns: (ndarray or list): 3D ndarray of shape (H x W x C) OR 2D ndarray of shape (H x W). The channel order is BGR.
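A minimal sketch of the conversion above, assuming tensor2img from this entry is in scope; the tensor is a random placeholder.

import torch

x = torch.rand(1, 3, 32, 32)     # B x C x H x W in [0, 1], RGB
img = tensor2img(x)              # H x W x C uint8, BGR
print(img.shape, img.dtype)      # (32, 32, 3) uint8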
2,730
import sys import cv2 import math import torch import random import numpy as np from scipy import fft from pathlib import Path from einops import rearrange from skimage import img_as_ubyte, img_as_float32 The provided code snippet includes necessary dependencies for implementing the `img2tensor` function. Write a Python function `def img2tensor(imgs, out_type=torch.float32)` to solve the following problem: Convert image numpy arrays into torch tensor. Args: imgs (Array or list[array]): Accept shapes: 3) list of numpy arrays 1) 3D numpy array of shape (H x W x 3/1); 2) 2D Tensor of shape (H x W). Tensor channel should be in RGB order. Returns: (array or list): 4D ndarray of shape (1 x C x H x W) Here is the function: def img2tensor(imgs, out_type=torch.float32): """Convert image numpy arrays into torch tensor. Args: imgs (Array or list[array]): Accept shapes: 3) list of numpy arrays 1) 3D numpy array of shape (H x W x 3/1); 2) 2D Tensor of shape (H x W). Tensor channel should be in RGB order. Returns: (array or list): 4D ndarray of shape (1 x C x H x W) """ def _img2tensor(img): if img.ndim == 2: tensor = torch.from_numpy(img[None, None,]).type(out_type) elif img.ndim == 3: tensor = torch.from_numpy(rearrange(img, 'h w c -> c h w')).type(out_type).unsqueeze(0) else: raise TypeError(f'2D or 3D numpy array expected, got{img.ndim}D array') return tensor if not (isinstance(imgs, np.ndarray) or (isinstance(imgs, list) and all(isinstance(t, np.ndarray) for t in imgs))): raise TypeError(f'Numpy array or list of numpy array expected, got {type(imgs)}') flag_numpy = isinstance(imgs, np.ndarray) if flag_numpy: imgs = [imgs,] result = [] for _img in imgs: result.append(_img2tensor(_img)) if len(result) == 1 and flag_numpy: result = result[0] return result
Convert image numpy arrays into torch tensors. Args: imgs (array or list[array]): Accept shapes: 1) 3D numpy array of shape (H x W x 3/1); 2) 2D numpy array of shape (H x W); 3) list of such numpy arrays. Channels should be in RGB order. Returns: (Tensor or list): 4D torch tensor of shape (1 x C x H x W)
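A short sketch pairing the converter above with the tensor2img entry, assuming img2tensor from this entry is in scope; the array is a random placeholder.

import numpy as np

im = np.random.rand(32, 32, 3).astype(np.float32)   # H x W x C, RGB, [0, 1]
t = img2tensor(im)                                   # 1 x C x H x W float32 tensor
print(t.shape)                                       # torch.Size([1, 3, 32, 32])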
2,731
import sys import cv2 import math import torch import random import numpy as np from scipy import fft from pathlib import Path from einops import rearrange from skimage import img_as_ubyte, img_as_float32 def bgr2rgb(im): return cv2.cvtColor(im, cv2.COLOR_BGR2RGB) The provided code snippet includes necessary dependencies for implementing the `imread` function. Write a Python function `def imread(path, chn='rgb', dtype='float32')` to solve the following problem: Read image. chn: 'rgb', 'bgr' or 'gray' out: im: h x w x c, numpy tensor Here is the function: def imread(path, chn='rgb', dtype='float32'): ''' Read image. chn: 'rgb', 'bgr' or 'gray' out: im: h x w x c, numpy tensor ''' im = cv2.imread(str(path), cv2.IMREAD_UNCHANGED) # BGR, uint8 try: if chn.lower() == 'rgb': if im.ndim == 3: im = bgr2rgb(im) else: im = np.stack((im, im, im), axis=2) elif chn.lower() == 'gray': assert im.ndim == 2 except: print(str(path)) if dtype == 'float32': im = im.astype(np.float32) / 255. elif dtype == 'float64': im = im.astype(np.float64) / 255. elif dtype == 'uint8': pass else: sys.exit('Please input corrected dtype: float32, float64 or uint8!') return im
Read image. chn: 'rgb', 'bgr' or 'gray' out: im: h x w x c, numpy tensor
2,732
import sys import cv2 import math import torch import random import numpy as np from scipy import fft from pathlib import Path from einops import rearrange from skimage import img_as_ubyte, img_as_float32 def rgb2bgr(im): return cv2.cvtColor(im, cv2.COLOR_RGB2BGR) The provided code snippet includes necessary dependencies for implementing the `imwrite` function. Write a Python function `def imwrite(im_in, path, chn='rgb', dtype_in='float32', qf=None)` to solve the following problem: Save image. Input: im: h x w x c, numpy tensor path: the saving path chn: the channel order of the im, Here is the function: def imwrite(im_in, path, chn='rgb', dtype_in='float32', qf=None): ''' Save image. Input: im: h x w x c, numpy tensor path: the saving path chn: the channel order of the im, ''' im = im_in.copy() if isinstance(path, str): path = Path(path) if dtype_in != 'uint8': im = img_as_ubyte(im) if chn.lower() == 'rgb' and im.ndim == 3: im = rgb2bgr(im) if qf is not None and path.suffix.lower() in ['.jpg', '.jpeg']: flag = cv2.imwrite(str(path), im, [int(cv2.IMWRITE_JPEG_QUALITY), int(qf)]) else: flag = cv2.imwrite(str(path), im) return flag
Save image. Input: im: h x w x c, numpy tensor path: the saving path chn: the channel order of the image
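A small sketch chaining this writer with the imread entry above, assuming both imwrite and imread are in scope; the file name is a placeholder and the snippet writes a temporary image to the working directory.

import numpy as np

im = np.random.rand(16, 16, 3).astype(np.float32)            # RGB, [0, 1]
imwrite(im, 'tmp_example.png', chn='rgb', dtype_in='float32')
back = imread('tmp_example.png', chn='rgb', dtype='float32')
print(back.shape)                                             # (16, 16, 3)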
2,733
import sys import cv2 import math import torch import random import numpy as np from scipy import fft from pathlib import Path from einops import rearrange from skimage import img_as_ubyte, img_as_float32 def bgr2rgb(im): return cv2.cvtColor(im, cv2.COLOR_BGR2RGB) def rgb2bgr(im): return cv2.cvtColor(im, cv2.COLOR_RGB2BGR) The provided code snippet includes necessary dependencies for implementing the `jpeg_compress` function. Write a Python function `def jpeg_compress(im, qf, chn_in='rgb')` to solve the following problem: Input: im: h x w x 3 array qf: compress factor, (0, 100] chn_in: 'rgb' or 'bgr' Return: Compressed Image with channel order: chn_in Here is the function: def jpeg_compress(im, qf, chn_in='rgb'): ''' Input: im: h x w x 3 array qf: compress factor, (0, 100] chn_in: 'rgb' or 'bgr' Return: Compressed Image with channel order: chn_in ''' # transform to BGR channle and uint8 data type im_bgr = rgb2bgr(im) if chn_in.lower() == 'rgb' else im if im.dtype != np.dtype('uint8'): im_bgr = img_as_ubyte(im_bgr) # JPEG compress flag, encimg = cv2.imencode('.jpg', im_bgr, [int(cv2.IMWRITE_JPEG_QUALITY), qf]) assert flag im_jpg_bgr = cv2.imdecode(encimg, 1) # uint8, BGR # transform back to original channel and the original data type im_out = bgr2rgb(im_jpg_bgr) if chn_in.lower() == 'rgb' else im_jpg_bgr if im.dtype != np.dtype('uint8'): im_out = img_as_float32(im_out).astype(im.dtype) return im_out
Input: im: h x w x 3 array qf: compression factor, (0, 100] chn_in: 'rgb' or 'bgr' Return: Compressed image with channel order: chn_in
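A minimal sketch of JPEG degradation with the helper above, assuming jpeg_compress from this entry is in scope; the quality factor is illustrative.

import numpy as np

im = np.random.rand(64, 64, 3).astype(np.float32)    # RGB, [0, 1]
im_jpg = jpeg_compress(im, qf=30, chn_in='rgb')      # lower qf -> stronger compression artifacts
print(im_jpg.shape, im_jpg.dtype)                    # (64, 64, 3) float32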
2,734
import sys import cv2 import math import torch import random import numpy as np from scipy import fft from pathlib import Path from einops import rearrange from skimage import img_as_ubyte, img_as_float32 The provided code snippet includes necessary dependencies for implementing the `data_aug_np` function. Write a Python function `def data_aug_np(image, mode)` to solve the following problem: Performs data augmentation of the input image Input: image: a cv2 (OpenCV) image mode: int. Choice of transformation to apply to the image 0 - no transformation 1 - flip up and down 2 - rotate counterwise 90 degree 3 - rotate 90 degree and flip up and down 4 - rotate 180 degree 5 - rotate 180 degree and flip 6 - rotate 270 degree 7 - rotate 270 degree and flip Here is the function: def data_aug_np(image, mode): ''' Performs data augmentation of the input image Input: image: a cv2 (OpenCV) image mode: int. Choice of transformation to apply to the image 0 - no transformation 1 - flip up and down 2 - rotate counterwise 90 degree 3 - rotate 90 degree and flip up and down 4 - rotate 180 degree 5 - rotate 180 degree and flip 6 - rotate 270 degree 7 - rotate 270 degree and flip ''' if mode == 0: # original out = image elif mode == 1: # flip up and down out = np.flipud(image) elif mode == 2: # rotate counterwise 90 degree out = np.rot90(image) elif mode == 3: # rotate 90 degree and flip up and down out = np.rot90(image) out = np.flipud(out) elif mode == 4: # rotate 180 degree out = np.rot90(image, k=2) elif mode == 5: # rotate 180 degree and flip out = np.rot90(image, k=2) out = np.flipud(out) elif mode == 6: # rotate 270 degree out = np.rot90(image, k=3) elif mode == 7: # rotate 270 degree and flip out = np.rot90(image, k=3) out = np.flipud(out) else: raise Exception('Invalid choice of image transformation') return out.copy()
Performs data augmentation of the input image Input: image: a cv2 (OpenCV) image mode: int. Choice of transformation to apply to the image 0 - no transformation 1 - flip up and down 2 - rotate counterclockwise by 90 degrees 3 - rotate 90 degrees and flip up and down 4 - rotate 180 degrees 5 - rotate 180 degrees and flip 6 - rotate 270 degrees 7 - rotate 270 degrees and flip
2,735
import sys import cv2 import math import torch import random import numpy as np from scipy import fft from pathlib import Path from einops import rearrange from skimage import img_as_ubyte, img_as_float32 The provided code snippet includes necessary dependencies for implementing the `inverse_data_aug_np` function. Write a Python function `def inverse_data_aug_np(image, mode)` to solve the following problem: Performs inverse data augmentation of the input image Here is the function: def inverse_data_aug_np(image, mode): ''' Performs inverse data augmentation of the input image ''' if mode == 0: # original out = image elif mode == 1: out = np.flipud(image) elif mode == 2: out = np.rot90(image, axes=(1,0)) elif mode == 3: out = np.flipud(image) out = np.rot90(out, axes=(1,0)) elif mode == 4: out = np.rot90(image, k=2, axes=(1,0)) elif mode == 5: out = np.flipud(image) out = np.rot90(out, k=2, axes=(1,0)) elif mode == 6: out = np.rot90(image, k=3, axes=(1,0)) elif mode == 7: # rotate 270 degree and flip out = np.flipud(image) out = np.rot90(out, k=3, axes=(1,0)) else: raise Exception('Invalid choice of image transformation') return out
Performs inverse data augmentation of the input image
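A quick consistency sketch for the augmentation pair above, assuming data_aug_np and inverse_data_aug_np from these entries are in scope; the array is a random placeholder.

import numpy as np

im = np.random.rand(8, 8, 3)
for mode in range(8):
    aug = data_aug_np(im, mode)
    rec = inverse_data_aug_np(aug, mode)
    assert np.array_equal(rec, im), f"mode {mode} is not inverted correctly"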
2,736
import sys import cv2 import math import torch import random import numpy as np from scipy import fft from pathlib import Path from einops import rearrange from skimage import img_as_ubyte, img_as_float32 def imshow(x, title=None, cbar=False): import matplotlib.pyplot as plt plt.imshow(np.squeeze(x), interpolation='nearest', cmap='gray') if title: plt.title(title) if cbar: plt.colorbar() plt.show()
null
2,737
import sys import cv2 import math import torch import random import numpy as np from scipy import fft from pathlib import Path from einops import rearrange from skimage import img_as_ubyte, img_as_float32 The provided code snippet includes necessary dependencies for implementing the `imgrad` function. Write a Python function `def imgrad(im, pading_mode='mirror')` to solve the following problem: Calculate image gradient. Input: im: h x w x c numpy array Here is the function: def imgrad(im, pading_mode='mirror'): ''' Calculate image gradient. Input: im: h x w x c numpy array ''' from scipy.ndimage import correlate # lazy import wx = np.array([[0, 0, 0], [-1, 1, 0], [0, 0, 0]], dtype=np.float32) wy = np.array([[0, -1, 0], [0, 1, 0], [0, 0, 0]], dtype=np.float32) if im.ndim == 3: gradx = np.stack( [correlate(im[:,:,c], wx, mode=pading_mode) for c in range(im.shape[2])], axis=2 ) grady = np.stack( [correlate(im[:,:,c], wy, mode=pading_mode) for c in range(im.shape[2])], axis=2 ) grad = np.concatenate((gradx, grady), axis=2) else: gradx = correlate(im, wx, mode=pading_mode) grady = correlate(im, wy, mode=pading_mode) grad = np.stack((gradx, grady), axis=2) return {'gradx': gradx, 'grady': grady, 'grad':grad}
Calculate image gradient. Input: im: h x w x c numpy array
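A minimal sketch of the gradient helper above, assuming imgrad from this entry is in scope (it relies on SciPy); the image is a random placeholder.

import numpy as np

im = np.random.rand(16, 16, 3).astype(np.float32)
grads = imgrad(im)
print(grads['gradx'].shape, grads['grad'].shape)   # (16, 16, 3) (16, 16, 6)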
2,738
import sys import cv2 import math import torch import random import numpy as np from scipy import fft from pathlib import Path from einops import rearrange from skimage import img_as_ubyte, img_as_float32 def convfft(im, weight): ''' Convolution with FFT Input: im: h1 x w1 x c numpy array weight: h2 x w2 numpy array Output: out: h1 x w1 x c numpy array ''' axes = (0,1) otf = psf2otf(weight, im.shape[:2]) if im.ndim == 3: otf = np.tile(otf[:, :, None], (1,1,im.shape[2])) out = fft.ifft2(fft.fft2(im, axes=axes) * otf, axes=axes).real return out The provided code snippet includes necessary dependencies for implementing the `imgrad_fft` function. Write a Python function `def imgrad_fft(im)` to solve the following problem: Calculate image gradient. Input: im: h x w x c numpy array Here is the function: def imgrad_fft(im): ''' Calculate image gradient. Input: im: h x w x c numpy array ''' wx = np.rot90(np.array([[0, 0, 0], [-1, 1, 0], [0, 0, 0]], dtype=np.float32), k=2) gradx = convfft(im, wx) wy = np.rot90(np.array([[0, -1, 0], [0, 1, 0], [0, 0, 0]], dtype=np.float32), k=2) grady = convfft(im, wy) grad = np.concatenate((gradx, grady), axis=2) return {'gradx': gradx, 'grady': grady, 'grad':grad}
Calculate image gradient. Input: im: h x w x c numpy array
2,739
import sys import cv2 import math import torch import random import numpy as np from scipy import fft from pathlib import Path from einops import rearrange from skimage import img_as_ubyte, img_as_float32 The provided code snippet includes necessary dependencies for implementing the `random_crop` function. Write a Python function `def random_crop(im, pch_size)` to solve the following problem: Randomly crop a patch from the give image. Here is the function: def random_crop(im, pch_size): ''' Randomly crop a patch from the give image. ''' h, w = im.shape[:2] if h == pch_size and w == pch_size: im_pch = im else: assert h >= pch_size or w >= pch_size ind_h = random.randint(0, h-pch_size) ind_w = random.randint(0, w-pch_size) im_pch = im[ind_h:ind_h+pch_size, ind_w:ind_w+pch_size,] return im_pch
Randomly crop a patch from the given image.
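A tiny sketch of the cropper above, assuming random_crop from this entry is in scope; the patch size is illustrative.

import numpy as np

im = np.random.rand(37, 50, 3)
pch = random_crop(im, pch_size=32)
print(pch.shape)    # (32, 32, 3)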
2,740
import torch from torch import nn import torch.nn.functional as F from einops import repeat from taming.modules.discriminator.model import NLayerDiscriminator, weights_init from taming.modules.losses.lpips import LPIPS from taming.modules.losses.vqperceptual import hinge_d_loss, vanilla_d_loss def hinge_d_loss_with_exemplar_weights(logits_real, logits_fake, weights): assert weights.shape[0] == logits_real.shape[0] == logits_fake.shape[0] loss_real = torch.mean(F.relu(1. - logits_real), dim=[1,2,3]) loss_fake = torch.mean(F.relu(1. + logits_fake), dim=[1,2,3]) loss_real = (weights * loss_real).sum() / weights.sum() loss_fake = (weights * loss_fake).sum() / weights.sum() d_loss = 0.5 * (loss_real + loss_fake) return d_loss
null
2,741
import torch from torch import nn import torch.nn.functional as F from einops import repeat from taming.modules.discriminator.model import NLayerDiscriminator, weights_init from taming.modules.losses.lpips import LPIPS from taming.modules.losses.vqperceptual import hinge_d_loss, vanilla_d_loss def adopt_weight(weight, global_step, threshold=0, value=0.): if global_step < threshold: weight = value return weight
null
2,742
import torch from torch import nn import torch.nn.functional as F from einops import repeat from taming.modules.discriminator.model import NLayerDiscriminator, weights_init from taming.modules.losses.lpips import LPIPS from taming.modules.losses.vqperceptual import hinge_d_loss, vanilla_d_loss def measure_perplexity(predicted_indices, n_embed): # src: https://github.com/karpathy/deep-vector-quantization/blob/main/model.py # eval cluster perplexity. when perplexity == num_embeddings then all clusters are used exactly equally encodings = F.one_hot(predicted_indices, n_embed).float().reshape(-1, n_embed) avg_probs = encodings.mean(0) perplexity = (-(avg_probs * torch.log(avg_probs + 1e-10)).sum()).exp() cluster_use = torch.sum(avg_probs > 0) return perplexity, cluster_use
null
2,743
import torch from torch import nn import torch.nn.functional as F from einops import repeat from taming.modules.discriminator.model import NLayerDiscriminator, weights_init from taming.modules.losses.lpips import LPIPS from taming.modules.losses.vqperceptual import hinge_d_loss, vanilla_d_loss def l1(x, y): return torch.abs(x-y)
null
2,744
import torch from torch import nn import torch.nn.functional as F from einops import repeat from taming.modules.discriminator.model import NLayerDiscriminator, weights_init from taming.modules.losses.lpips import LPIPS from taming.modules.losses.vqperceptual import hinge_d_loss, vanilla_d_loss def l2(x, y): return torch.pow((x-y), 2)
null
2,745
import math import torch import torch.nn as nn import torch.nn.functional as F import torch.utils.checkpoint as checkpoint from timm.models.layers import DropPath, to_2tuple, trunc_normal_ The provided code snippet includes necessary dependencies for implementing the `window_partition` function. Write a Python function `def window_partition(x, window_size)` to solve the following problem: Args: x: (B, H, W, C) window_size (int): window size Returns: windows: (num_windows*B, window_size, window_size, C) Here is the function: def window_partition(x, window_size): """ Args: x: (B, H, W, C) window_size (int): window size Returns: windows: (num_windows*B, window_size, window_size, C) """ B, H, W, C = x.shape x = x.view(B, H // window_size, window_size, W // window_size, window_size, C) windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C) return windows
Args: x: (B, H, W, C) window_size (int): window size Returns: windows: (num_windows*B, window_size, window_size, C)
2,746
import math import torch import torch.nn as nn import torch.nn.functional as F import torch.utils.checkpoint as checkpoint from timm.models.layers import DropPath, to_2tuple, trunc_normal_ The provided code snippet includes necessary dependencies for implementing the `window_reverse` function. Write a Python function `def window_reverse(windows, window_size, H, W)` to solve the following problem: Args: windows: (num_windows*B, window_size, window_size, C) window_size (int): Window size H (int): Height of image W (int): Width of image Returns: x: (B, H, W, C) Here is the function: def window_reverse(windows, window_size, H, W): """ Args: windows: (num_windows*B, window_size, window_size, C) window_size (int): Window size H (int): Height of image W (int): Width of image Returns: x: (B, H, W, C) """ B = int(windows.shape[0] / (H * W / window_size / window_size)) x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1) x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1) return x
Args: windows: (num_windows*B, window_size, window_size, C) window_size (int): Window size H (int): Height of image W (int): Width of image Returns: x: (B, H, W, C)
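A quick round-trip sketch for the two window helpers above, assuming window_partition and window_reverse from these entries are in scope; the sizes are illustrative and H and W must be divisible by the window size.

import torch

x = torch.rand(2, 56, 56, 96)                    # B, H, W, C
windows = window_partition(x, window_size=7)     # (2 * 8 * 8, 7, 7, 96)
x_back = window_reverse(windows, 7, 56, 56)
assert torch.equal(x_back, x)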
2,747
import re import torch import torch.nn as nn import torch.nn.functional as F import torch.nn.utils.spectral_norm as spectral_norm from ldm.modules.diffusionmodules.util import normalization def get_nonspade_norm_layer(opt, norm_type='instance'): # helper function to get # output channels of the previous layer def get_out_channel(layer): if hasattr(layer, 'out_channels'): return getattr(layer, 'out_channels') return layer.weight.size(0) # this function will be returned def add_norm_layer(layer): nonlocal norm_type if norm_type.startswith('spectral'): layer = spectral_norm(layer) subnorm_type = norm_type[len('spectral'):] if subnorm_type == 'none' or len(subnorm_type) == 0: return layer # remove bias in the previous layer, which is meaningless # since it has no effect after normalization if getattr(layer, 'bias', None) is not None: delattr(layer, 'bias') layer.register_parameter('bias', None) if subnorm_type == 'batch': norm_layer = nn.BatchNorm2d(get_out_channel(layer), affine=True) elif subnorm_type == 'sync_batch': norm_layer = SynchronizedBatchNorm2d(get_out_channel(layer), affine=True) elif subnorm_type == 'instance': norm_layer = nn.InstanceNorm2d(get_out_channel(layer), affine=False) else: raise ValueError('normalization layer %s is not recognized' % subnorm_type) return nn.Sequential(layer, norm_layer) return add_norm_layer
null
2,748
import os import math import torch import torch.nn as nn import numpy as np from einops import repeat from ldm.util import instantiate_from_config def make_beta_schedule(schedule, n_timestep, linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3): if schedule == "linear": betas = ( torch.linspace(linear_start ** 0.5, linear_end ** 0.5, n_timestep, dtype=torch.float64) ** 2 ) elif schedule == "cosine": timesteps = ( torch.arange(n_timestep + 1, dtype=torch.float64) / n_timestep + cosine_s ) alphas = timesteps / (1 + cosine_s) * np.pi / 2 alphas = torch.cos(alphas).pow(2) alphas = alphas / alphas[0] betas = 1 - alphas[1:] / alphas[:-1] betas = np.clip(betas, a_min=0, a_max=0.999) elif schedule == "sqrt_linear": betas = torch.linspace(linear_start, linear_end, n_timestep, dtype=torch.float64) elif schedule == "sqrt": betas = torch.linspace(linear_start, linear_end, n_timestep, dtype=torch.float64) ** 0.5 else: raise ValueError(f"schedule '{schedule}' unknown.") return betas.numpy()
null
2,749
import os import math import torch import torch.nn as nn import numpy as np from einops import repeat from ldm.util import instantiate_from_config def make_ddim_timesteps(ddim_discr_method, num_ddim_timesteps, num_ddpm_timesteps, verbose=True): if ddim_discr_method == 'uniform': c = num_ddpm_timesteps // num_ddim_timesteps ddim_timesteps = np.asarray(list(range(0, num_ddpm_timesteps, c))) elif ddim_discr_method == 'quad': ddim_timesteps = ((np.linspace(0, np.sqrt(num_ddpm_timesteps * .8), num_ddim_timesteps)) ** 2).astype(int) else: raise NotImplementedError(f'There is no ddim discretization method called "{ddim_discr_method}"') # assert ddim_timesteps.shape[0] == num_ddim_timesteps # add one to get the final alpha values right (the ones from first scale to data during sampling) steps_out = ddim_timesteps if verbose: print(f'Selected timesteps for ddim sampler: {steps_out}') return steps_out
null
2,750
import os import math import torch import torch.nn as nn import numpy as np from einops import repeat from ldm.util import instantiate_from_config def make_ddim_sampling_parameters(alphacums, ddim_timesteps, eta, verbose=True): # select alphas for computing the variance schedule alphas = alphacums[ddim_timesteps] alphas_prev = np.asarray([alphacums[0]] + alphacums[ddim_timesteps[:-1]].tolist()) # according the the formula provided in https://arxiv.org/abs/2010.02502 sigmas = eta * np.sqrt((1 - alphas_prev) / (1 - alphas) * (1 - alphas / alphas_prev)) if verbose: print(f'Selected alphas for ddim sampler: a_t: {alphas}; a_(t-1): {alphas_prev}') print(f'For the chosen value of eta, which is {eta}, ' f'this results in the following sigma_t schedule for ddim sampler {sigmas}') return sigmas, alphas, alphas_prev
null
2,751
import os import math import torch import torch.nn as nn import numpy as np from einops import repeat from ldm.util import instantiate_from_config The provided code snippet includes necessary dependencies for implementing the `betas_for_alpha_bar` function. Write a Python function `def betas_for_alpha_bar(num_diffusion_timesteps, alpha_bar, max_beta=0.999)` to solve the following problem: Create a beta schedule that discretizes the given alpha_t_bar function, which defines the cumulative product of (1-beta) over time from t = [0,1]. :param num_diffusion_timesteps: the number of betas to produce. :param alpha_bar: a lambda that takes an argument t from 0 to 1 and produces the cumulative product of (1-beta) up to that part of the diffusion process. :param max_beta: the maximum beta to use; use values lower than 1 to prevent singularities. Here is the function: def betas_for_alpha_bar(num_diffusion_timesteps, alpha_bar, max_beta=0.999): """ Create a beta schedule that discretizes the given alpha_t_bar function, which defines the cumulative product of (1-beta) over time from t = [0,1]. :param num_diffusion_timesteps: the number of betas to produce. :param alpha_bar: a lambda that takes an argument t from 0 to 1 and produces the cumulative product of (1-beta) up to that part of the diffusion process. :param max_beta: the maximum beta to use; use values lower than 1 to prevent singularities. """ betas = [] for i in range(num_diffusion_timesteps): t1 = i / num_diffusion_timesteps t2 = (i + 1) / num_diffusion_timesteps betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta)) return np.array(betas)
Create a beta schedule that discretizes the given alpha_t_bar function, which defines the cumulative product of (1-beta) over time from t = [0,1]. :param num_diffusion_timesteps: the number of betas to produce. :param alpha_bar: a lambda that takes an argument t from 0 to 1 and produces the cumulative product of (1-beta) up to that part of the diffusion process. :param max_beta: the maximum beta to use; use values lower than 1 to prevent singularities.
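A minimal sketch of the schedule construction above, assuming betas_for_alpha_bar from this entry is in scope; the squared-cosine alpha_bar below is one commonly used choice, and the timestep count is illustrative.

import math

betas = betas_for_alpha_bar(
    1000,
    lambda t: math.cos((t + 0.008) / 1.008 * math.pi / 2) ** 2,
)
print(betas.shape, betas.min(), betas.max())   # (1000,), small early betas, capped at 0.999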
2,752
import os import math import torch import torch.nn as nn import numpy as np from einops import repeat from ldm.util import instantiate_from_config def extract_into_tensor(a, t, x_shape): b, *_ = t.shape out = a.gather(-1, t) return out.reshape(b, *((1,) * (len(x_shape) - 1)))
null
2,753
import os import math import torch import torch.nn as nn import numpy as np from einops import repeat from ldm.util import instantiate_from_config class CheckpointFunction(torch.autograd.Function): def forward(ctx, run_function, length, *args): ctx.run_function = run_function ctx.input_tensors = list(args[:length]) ctx.input_params = list(args[length:]) with torch.no_grad(): output_tensors = ctx.run_function(*ctx.input_tensors) return output_tensors def backward(ctx, *output_grads): ctx.input_tensors = [x.detach().requires_grad_(True) for x in ctx.input_tensors] with torch.enable_grad(): # Fixes a bug where the first op in run_function modifies the # Tensor storage in place, which is not allowed for detach()'d # Tensors. shallow_copies = [x.view_as(x) for x in ctx.input_tensors] output_tensors = ctx.run_function(*shallow_copies) input_grads = torch.autograd.grad( output_tensors, ctx.input_tensors + ctx.input_params, output_grads, allow_unused=True, ) del ctx.input_tensors del ctx.input_params del output_tensors return (None, None) + input_grads The provided code snippet includes necessary dependencies for implementing the `checkpoint` function. Write a Python function `def checkpoint(func, inputs, params, flag)` to solve the following problem: Evaluate a function without caching intermediate activations, allowing for reduced memory at the expense of extra compute in the backward pass. :param func: the function to evaluate. :param inputs: the argument sequence to pass to `func`. :param params: a sequence of parameters `func` depends on but does not explicitly take as arguments. :param flag: if False, disable gradient checkpointing. Here is the function: def checkpoint(func, inputs, params, flag): """ Evaluate a function without caching intermediate activations, allowing for reduced memory at the expense of extra compute in the backward pass. :param func: the function to evaluate. :param inputs: the argument sequence to pass to `func`. :param params: a sequence of parameters `func` depends on but does not explicitly take as arguments. :param flag: if False, disable gradient checkpointing. """ if flag: args = tuple(inputs) + tuple(params) return CheckpointFunction.apply(func, len(inputs), *args) else: return func(*inputs)
Evaluate a function without caching intermediate activations, allowing for reduced memory at the expense of extra compute in the backward pass. :param func: the function to evaluate. :param inputs: the argument sequence to pass to `func`. :param params: a sequence of parameters `func` depends on but does not explicitly take as arguments. :param flag: if False, disable gradient checkpointing.
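A small sketch of the gradient-checkpointing wrapper above, assuming checkpoint and CheckpointFunction from this entry are in scope; the layer and input are placeholders.

import torch
import torch.nn as nn

layer = nn.Linear(16, 16)
x = torch.randn(4, 16, requires_grad=True)
# Recompute the layer's activations during the backward pass instead of storing them.
y = checkpoint(layer, (x,), tuple(layer.parameters()), True)
y.sum().backward()
print(x.grad.shape)    # torch.Size([4, 16])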
2,754
import os import math import torch import torch.nn as nn import numpy as np from einops import repeat from ldm.util import instantiate_from_config The provided code snippet includes necessary dependencies for implementing the `timestep_embedding` function. Write a Python function `def timestep_embedding(timesteps, dim, max_period=10000, repeat_only=False)` to solve the following problem: Create sinusoidal timestep embeddings. :param timesteps: a 1-D Tensor of N indices, one per batch element. These may be fractional. :param dim: the dimension of the output. :param max_period: controls the minimum frequency of the embeddings. :return: an [N x dim] Tensor of positional embeddings. Here is the function: def timestep_embedding(timesteps, dim, max_period=10000, repeat_only=False): """ Create sinusoidal timestep embeddings. :param timesteps: a 1-D Tensor of N indices, one per batch element. These may be fractional. :param dim: the dimension of the output. :param max_period: controls the minimum frequency of the embeddings. :return: an [N x dim] Tensor of positional embeddings. """ if not repeat_only: half = dim // 2 freqs = torch.exp( -math.log(max_period) * torch.arange(start=0, end=half, dtype=torch.float32) / half ).to(device=timesteps.device) args = timesteps[:, None].float() * freqs[None] embedding = torch.cat([torch.cos(args), torch.sin(args)], dim=-1) if dim % 2: embedding = torch.cat([embedding, torch.zeros_like(embedding[:, :1])], dim=-1) else: embedding = repeat(timesteps, 'b -> b d', d=dim) return embedding
Create sinusoidal timestep embeddings. :param timesteps: a 1-D Tensor of N indices, one per batch element. These may be fractional. :param dim: the dimension of the output. :param max_period: controls the minimum frequency of the embeddings. :return: an [N x dim] Tensor of positional embeddings.
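A short sketch of the embedding above, assuming timestep_embedding from this entry is in scope; the batch of timesteps is a placeholder.

import torch

t = torch.tensor([0, 10, 250, 999])
emb = timestep_embedding(t, dim=128)
print(emb.shape)    # torch.Size([4, 128])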
2,755
import os import math import torch import torch.nn as nn import numpy as np from einops import repeat from ldm.util import instantiate_from_config The provided code snippet includes necessary dependencies for implementing the `zero_module` function. Write a Python function `def zero_module(module)` to solve the following problem: Zero out the parameters of a module and return it. Here is the function: def zero_module(module): """ Zero out the parameters of a module and return it. """ for p in module.parameters(): p.detach().zero_() return module
Zero out the parameters of a module and return it.
2,756
import os import math import torch import torch.nn as nn import numpy as np from einops import repeat from ldm.util import instantiate_from_config The provided code snippet includes necessary dependencies for implementing the `scale_module` function. Write a Python function `def scale_module(module, scale)` to solve the following problem: Scale the parameters of a module and return it. Here is the function: def scale_module(module, scale): """ Scale the parameters of a module and return it. """ for p in module.parameters(): p.detach().mul_(scale) return module
Scale the parameters of a module and return it.
2,757
import os import math import torch import torch.nn as nn import numpy as np from einops import repeat from ldm.util import instantiate_from_config The provided code snippet includes necessary dependencies for implementing the `mean_flat` function. Write a Python function `def mean_flat(tensor)` to solve the following problem: Take the mean over all non-batch dimensions. Here is the function: def mean_flat(tensor): """ Take the mean over all non-batch dimensions. """ return tensor.mean(dim=list(range(1, len(tensor.shape))))
Take the mean over all non-batch dimensions.
2,758
import os import math import torch import torch.nn as nn import numpy as np from einops import repeat from ldm.util import instantiate_from_config class GroupNorm32(nn.GroupNorm): def forward(self, x): return super().forward(x.float()).type(x.dtype) The provided code snippet includes necessary dependencies for implementing the `normalization` function. Write a Python function `def normalization(channels, norm_channel=32)` to solve the following problem: Make a standard normalization layer. :param channels: number of input channels. :return: an nn.Module for normalization. Here is the function: def normalization(channels, norm_channel=32): """ Make a standard normalization layer. :param channels: number of input channels. :return: an nn.Module for normalization. """ return GroupNorm32(norm_channel, channels)
Make a standard normalization layer. :param channels: number of input channels. :return: an nn.Module for normalization.
2,759
import os import math import torch import torch.nn as nn import numpy as np from einops import repeat from ldm.util import instantiate_from_config The provided code snippet includes necessary dependencies for implementing the `conv_nd` function. Write a Python function `def conv_nd(dims, *args, **kwargs)` to solve the following problem: Create a 1D, 2D, or 3D convolution module. Here is the function: def conv_nd(dims, *args, **kwargs): """ Create a 1D, 2D, or 3D convolution module. """ if dims == 1: return nn.Conv1d(*args, **kwargs) elif dims == 2: return nn.Conv2d(*args, **kwargs) elif dims == 3: return nn.Conv3d(*args, **kwargs) raise ValueError(f"unsupported dimensions: {dims}")
Create a 1D, 2D, or 3D convolution module.