Dataset columns: signature (string, lengths 8–3.44k), body (string, lengths 0–1.41M), docstring (string, lengths 1–122k), id (string, lengths 5–17).
@exporter.export
@preprocess_xarray
@check_units('<STR_LIT>', '<STR_LIT>', '<STR_LIT>')
def mixing_ratio_from_relative_humidity(relative_humidity, temperature, pressure):
    r"""Calculate the mixing ratio from relative humidity, temperature, and pressure.

    Parameters
    ----------
    relative_humidity : array_like
        The relative humidity expressed as a unitless ratio in the range [0, 1].
        Can also pass a percentage if proper units are attached.
    temperature : `pint.Quantity`
        Air temperature
    pressure : `pint.Quantity`
        Total atmospheric pressure

    Returns
    -------
    `pint.Quantity`
        Dimensionless mixing ratio

    Notes
    -----
    Formula adapted from [Hobbs1977]_ pg. 74.

    .. math:: w = (RH)(w_s)

    * :math:`w` is mixing ratio
    * :math:`RH` is relative humidity as a unitless ratio
    * :math:`w_s` is the saturation mixing ratio

    See Also
    --------
    relative_humidity_from_mixing_ratio, saturation_mixing_ratio

    """
    return (relative_humidity
            * saturation_mixing_ratio(pressure, temperature)).to('<STR_LIT>')
f8476:m26
@exporter.export
@preprocess_xarray
@check_units('<STR_LIT>', '<STR_LIT>', '<STR_LIT>')
def relative_humidity_from_mixing_ratio(mixing_ratio, temperature, pressure):
    r"""Calculate the relative humidity from mixing ratio, temperature, and pressure.

    Parameters
    ----------
    mixing_ratio : `pint.Quantity`
        Dimensionless mass mixing ratio
    temperature : `pint.Quantity`
        Air temperature
    pressure : `pint.Quantity`
        Total atmospheric pressure

    Returns
    -------
    `pint.Quantity`
        Relative humidity

    Notes
    -----
    Formula based on that from [Hobbs1977]_ pg. 74.

    .. math:: RH = \frac{w}{w_s}

    * :math:`RH` is relative humidity as a unitless ratio
    * :math:`w` is mixing ratio
    * :math:`w_s` is the saturation mixing ratio

    See Also
    --------
    mixing_ratio_from_relative_humidity, saturation_mixing_ratio

    """
    return mixing_ratio / saturation_mixing_ratio(pressure, temperature)
f8476:m27
@exporter.export
@preprocess_xarray
@check_units('<STR_LIT>')
def mixing_ratio_from_specific_humidity(specific_humidity):
    r"""Calculate the mixing ratio from specific humidity.

    Parameters
    ----------
    specific_humidity : `pint.Quantity`
        Specific humidity of air

    Returns
    -------
    `pint.Quantity`
        Mixing ratio

    Notes
    -----
    Formula from [Salby1996]_ pg. 118.

    .. math:: w = \frac{q}{1-q}

    * :math:`w` is mixing ratio
    * :math:`q` is the specific humidity

    See Also
    --------
    mixing_ratio, specific_humidity_from_mixing_ratio

    """
    try:
        specific_humidity = specific_humidity.to('<STR_LIT>')
    except AttributeError:
        pass
    return specific_humidity / (1 - specific_humidity)
f8476:m28
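The conversion in this record and its inverse below are simple algebraic rearrangements. A minimal numeric sketch with plain floats (no unit handling, standalone helpers rather than the library's decorated functions) shows the round trip:

```python
def w_from_q(q):
    """Mass mixing ratio w from specific humidity q (kg/kg): w = q / (1 - q)."""
    return q / (1.0 - q)

def q_from_w(w):
    """Specific humidity q from mass mixing ratio w (kg/kg): q = w / (1 + w)."""
    return w / (1.0 + w)

q = 0.012           # 12 g of vapor per kg of moist air
w = w_from_q(q)     # slightly larger, since w is per kg of *dry* air
```

Because the two formulas are exact inverses, `q_from_w(w_from_q(q))` recovers `q` to rounding error.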
@exporter.export
@preprocess_xarray
@check_units('<STR_LIT>')
def specific_humidity_from_mixing_ratio(mixing_ratio):
    r"""Calculate the specific humidity from the mixing ratio.

    Parameters
    ----------
    mixing_ratio : `pint.Quantity`
        Mixing ratio

    Returns
    -------
    `pint.Quantity`
        Specific humidity

    Notes
    -----
    Formula from [Salby1996]_ pg. 118.

    .. math:: q = \frac{w}{1+w}

    * :math:`w` is mixing ratio
    * :math:`q` is the specific humidity

    See Also
    --------
    mixing_ratio, mixing_ratio_from_specific_humidity

    """
    try:
        mixing_ratio = mixing_ratio.to('<STR_LIT>')
    except AttributeError:
        pass
    return mixing_ratio / (1 + mixing_ratio)
f8476:m29
@exporter.export
@preprocess_xarray
@check_units('<STR_LIT>', '<STR_LIT>', '<STR_LIT>')
def relative_humidity_from_specific_humidity(specific_humidity, temperature, pressure):
    r"""Calculate the relative humidity from specific humidity, temperature, and pressure.

    Parameters
    ----------
    specific_humidity : `pint.Quantity`
        Specific humidity of air
    temperature : `pint.Quantity`
        Air temperature
    pressure : `pint.Quantity`
        Total atmospheric pressure

    Returns
    -------
    `pint.Quantity`
        Relative humidity

    Notes
    -----
    Formula based on that from [Hobbs1977]_ pg. 74. and [Salby1996]_ pg. 118.

    .. math:: RH = \frac{q}{(1-q)w_s}

    * :math:`RH` is relative humidity as a unitless ratio
    * :math:`q` is specific humidity
    * :math:`w_s` is the saturation mixing ratio

    See Also
    --------
    relative_humidity_from_mixing_ratio

    """
    return (mixing_ratio_from_specific_humidity(specific_humidity)
            / saturation_mixing_ratio(pressure, temperature))
f8476:m30
@exporter.export
@preprocess_xarray
@check_units('<STR_LIT>', '<STR_LIT>', '<STR_LIT>', '<STR_LIT>')
def cape_cin(pressure, temperature, dewpt, parcel_profile):
    r"""Calculate CAPE and CIN.

    Calculate the convective available potential energy (CAPE) and convective inhibition
    (CIN) of a given upper air profile and parcel path. CIN is integrated between the
    surface and LFC, CAPE is integrated between the LFC and EL (or top of sounding).
    Intersection points of the measured temperature profile and parcel profile are
    linearly interpolated.

    Parameters
    ----------
    pressure : `pint.Quantity`
        The atmospheric pressure level(s) of interest. The first entry should be the
        starting point pressure.
    temperature : `pint.Quantity`
        The atmospheric temperature corresponding to pressure.
    dewpt : `pint.Quantity`
        The atmospheric dew point corresponding to pressure.
    parcel_profile : `pint.Quantity`
        The temperature profile of the parcel

    Returns
    -------
    `pint.Quantity`
        Convective available potential energy (CAPE).
    `pint.Quantity`
        Convective inhibition (CIN).

    Notes
    -----
    Formula adopted from [Hobbs1977]_.

    .. math:: \text{CAPE} = -R_d \int_{LFC}^{EL} (T_{parcel} - T_{env}) d\text{ln}(p)

    .. math:: \text{CIN} = -R_d \int_{SFC}^{LFC} (T_{parcel} - T_{env}) d\text{ln}(p)

    * :math:`CAPE` Convective available potential energy
    * :math:`CIN` Convective inhibition
    * :math:`LFC` Pressure of the level of free convection
    * :math:`EL` Pressure of the equilibrium level
    * :math:`SFC` Level of the surface or beginning of parcel path
    * :math:`R_d` Gas constant
    * :math:`g` Gravitational acceleration
    * :math:`T_{parcel}` Parcel temperature
    * :math:`T_{env}` Environment temperature
    * :math:`p` Atmospheric pressure

    See Also
    --------
    lfc, el

    """
    lfc_pressure, _ = lfc(pressure, temperature, dewpt,
                          parcel_temperature_profile=parcel_profile)
    if np.isnan(lfc_pressure):
        return 0 * units('<STR_LIT>'), 0 * units('<STR_LIT>')
    else:
        lfc_pressure = lfc_pressure.magnitude
    el_pressure, _ = el(pressure, temperature, dewpt,
                        parcel_temperature_profile=parcel_profile)
    if np.isnan(el_pressure):
        el_pressure = pressure[-1].magnitude
    else:
        el_pressure = el_pressure.magnitude
    y = (parcel_profile - temperature).to(units.degK)
    x, y = _find_append_zero_crossings(np.copy(pressure), y)
    p_mask = _less_or_close(x, lfc_pressure) & _greater_or_close(x, el_pressure)
    x_clipped = x[p_mask]
    y_clipped = y[p_mask]
    cape = (mpconsts.Rd
            * (np.trapz(y_clipped, np.log(x_clipped)) * units.degK)).to(units('<STR_LIT>'))
    p_mask = _greater_or_close(x, lfc_pressure)
    x_clipped = x[p_mask]
    y_clipped = y[p_mask]
    cin = (mpconsts.Rd
           * (np.trapz(y_clipped, np.log(x_clipped)) * units.degK)).to(units('<STR_LIT>'))
    return cape, cin
f8476:m31
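The core of the CAPE integral above is a trapezoidal sum of the parcel-minus-environment temperature difference against log pressure. A minimal sketch under assumed values (Rd = 287.05 J kg⁻¹ K⁻¹; a made-up four-level positive-buoyancy segment between EL and LFC, ordered with pressure ascending so the sign flip in the formula is absorbed):

```python
import numpy as np

RD = 287.05  # J kg^-1 K^-1, dry-air gas constant (assumed value)

# Pressure levels (hPa, ascending from EL toward LFC) and the
# parcel-minus-environment temperature difference (K) at each level.
p_hpa = np.array([400.0, 500.0, 700.0, 850.0])
dT = np.array([0.0, 3.0, 4.0, 0.0])

# CAPE = -Rd * integral_{LFC}^{EL} (Tp - Te) d(ln p); flipping the limits
# gives +Rd times a trapezoidal sum over ascending log-pressure.
log_p = np.log(p_hpa)
cape = RD * np.sum(np.diff(log_p) * (dT[:-1] + dT[1:]) / 2.0)  # J kg^-1
```

With these numbers the sum comes out to a few hundred J/kg, i.e. a modestly unstable profile; the library body does the same integral via `np.trapz` with unit handling attached.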
def _find_append_zero_crossings(x, y):
    r"""Find and interpolate zero crossings.

    Estimate the zero crossings of an x,y series and add estimated crossings to series,
    returning a sorted array with no duplicate values.

    Parameters
    ----------
    x : `pint.Quantity`
        x values of data
    y : `pint.Quantity`
        y values of data

    Returns
    -------
    x : `pint.Quantity`
        x values of data
    y : `pint.Quantity`
        y values of data

    """
    crossings = find_intersections(x[1:], y[1:], np.zeros_like(y[1:]) * y.units)
    x = concatenate((x, crossings[0]))
    y = concatenate((y, crossings[1]))
    sort_idx = np.argsort(x)
    x = x[sort_idx]
    y = y[sort_idx]
    keep_idx = np.ediff1d(x, to_end=[1]) > 0
    x = x[keep_idx]
    y = y[keep_idx]
    return x, y
f8476:m32
@exporter.export
@preprocess_xarray
@check_units('<STR_LIT>', '<STR_LIT>', '<STR_LIT>')
def most_unstable_parcel(pressure, temperature, dewpoint, heights=None,
                         bottom=None, depth=<NUM_LIT> * units.hPa):
    r"""Determine the most unstable parcel in a layer.

    Determines the most unstable parcel of air by calculating the equivalent potential
    temperature and finding its maximum in the specified layer.

    Parameters
    ----------
    pressure : `pint.Quantity`
        Atmospheric pressure profile
    temperature : `pint.Quantity`
        Atmospheric temperature profile
    dewpoint : `pint.Quantity`
        Atmospheric dewpoint profile
    heights : `pint.Quantity`, optional
        Atmospheric height profile. Standard atmosphere assumed when None (the default).
    bottom : `pint.Quantity`, optional
        Bottom of the layer to consider for the calculation in pressure or height.
        Defaults to using the bottom pressure or height.
    depth : `pint.Quantity`, optional
        Depth of the layer to consider for the calculation in pressure or height.
        Defaults to 300 hPa.

    Returns
    -------
    `pint.Quantity`
        Pressure, temperature, and dew point of most unstable parcel in the profile.
    integer
        Index of the most unstable parcel in the given profile

    See Also
    --------
    get_layer

    """
    p_layer, t_layer, td_layer = get_layer(pressure, temperature, dewpoint, bottom=bottom,
                                           depth=depth, heights=heights, interpolate=False)
    theta_e = equivalent_potential_temperature(p_layer, t_layer, td_layer)
    max_idx = np.argmax(theta_e)
    return p_layer[max_idx], t_layer[max_idx], td_layer[max_idx], max_idx
f8476:m33
@exporter.export
@preprocess_xarray
@check_units('<STR_LIT>', '<STR_LIT>', '<STR_LIT>')
def isentropic_interpolation(theta_levels, pressure, temperature, *args, **kwargs):
    r"""Interpolate data in isobaric coordinates to isentropic coordinates.

    Parameters
    ----------
    theta_levels : array
        One-dimensional array of desired theta surfaces
    pressure : array
        One-dimensional array of pressure levels
    temperature : array
        Array of temperature
    args : array, optional
        Any additional variables will be interpolated to each isentropic level.

    Returns
    -------
    list
        List with pressure at each isentropic level, followed by each additional
        argument interpolated to isentropic coordinates.

    Other Parameters
    ----------------
    axis : int, optional
        The axis corresponding to the vertical in the temperature array, defaults to 0.
    tmpk_out : bool, optional
        If true, will calculate temperature and output as the last item in the output
        list. Defaults to False.
    max_iters : int, optional
        The maximum number of iterations to use in calculation, defaults to 50.
    eps : float, optional
        The desired absolute error in the calculated value, defaults to 1e-6.
    bottom_up_search : bool, optional
        Controls whether to search for theta levels bottom-up, or top-down. Defaults to
        True, which is bottom-up search.

    Notes
    -----
    Input variable arrays must have the same number of vertical levels as the pressure
    levels array. Pressure is calculated on isentropic surfaces by assuming that
    temperature varies linearly with the natural log of pressure. Linear interpolation
    is then used in the vertical to find the pressure at each isentropic level.
    Interpolation method from [Ziv1994]_. Any additional arguments are assumed to vary
    linearly with temperature and will be linearly interpolated to the new isentropic
    levels.

    See Also
    --------
    potential_temperature

    """
    def _isen_iter(iter_log_p, isentlevs_nd, ka, a, b, pok):
        exner = pok * np.exp(-ka * iter_log_p)
        t = a * iter_log_p + b
        # Newton-Raphson iteration
        f = isentlevs_nd - t * exner
        fp = exner * (ka * t - a)
        return iter_log_p - (f / fp)

    tmpk_out = kwargs.pop('<STR_LIT>', False)
    max_iters = kwargs.pop('<STR_LIT>', 50)
    eps = kwargs.pop('<STR_LIT>', <NUM_LIT>)
    axis = kwargs.pop('<STR_LIT>', 0)
    bottom_up_search = kwargs.pop('<STR_LIT>', True)
    ndim = temperature.ndim
    pres = pressure.to('<STR_LIT>')
    temperature = temperature.to('<STR_LIT>')
    slices = [np.newaxis] * ndim
    slices[axis] = slice(None)
    slices = tuple(slices)
    pres = np.broadcast_to(pres[slices], temperature.shape) * pres.units
    sort_pres = np.argsort(pres.m, axis=axis)
    sort_pres = np.swapaxes(np.swapaxes(sort_pres, 0, axis)[::-1], 0, axis)
    sorter = broadcast_indices(pres, sort_pres, ndim, axis)
    levs = pres[sorter]
    tmpk = temperature[sorter]
    theta_levels = np.asanyarray(theta_levels.to('<STR_LIT>')).reshape(-1)
    isentlevels = theta_levels[np.argsort(theta_levels)]
    shape = list(temperature.shape)
    shape[axis] = isentlevels.size
    isentlevs_nd = np.broadcast_to(isentlevels[slices], shape)
    ka = mpconsts.kappa.m_as('<STR_LIT>')
    pres_theta = potential_temperature(levs, tmpk)
    if np.max(pres_theta.m) < np.max(theta_levels):
        raise ValueError('<STR_LIT>')
    log_p = np.log(levs.m)
    pok = mpconsts.P0 ** ka
    above, below, good = find_bounding_indices(pres_theta.m, theta_levels, axis,
                                               from_below=bottom_up_search)
    a = (tmpk.m[above] - tmpk.m[below]) / (log_p[above] - log_p[below])
    b = tmpk.m[above] - a * log_p[above]
    isentprs = 0.5 * (log_p[above] + log_p[below])
    good &= ~np.isnan(a)
    log_p_solved = so.fixed_point(_isen_iter, isentprs[good],
                                  args=(isentlevs_nd[good], ka, a[good], b[good], pok.m),
                                  xtol=eps, maxiter=max_iters)
    isentprs[good] = np.exp(log_p_solved)
    isentprs[~(good & _less_or_close(isentprs, np.max(pres.m)))] = np.nan
    ret = [isentprs * units.hPa]
    if tmpk_out:
        ret.append((isentlevs_nd / ((mpconsts.P0.m / isentprs) ** ka)) * units.kelvin)
    if args:
        others = interpolate_1d(isentlevels, pres_theta.m, *(arr[sorter] for arr in args),
                                axis=axis)
        if len(args) > 1:
            ret.extend(others)
        else:
            ret.append(others)
    return ret
f8476:m34
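The iteration above repeatedly evaluates Poisson's equation, θ = T (P0/p)^κ, which relates temperature on a pressure surface to potential temperature. A minimal float-only sketch of that relation, with assumed constants (P0 = 1000 hPa reference pressure, κ = Rd/cp ≈ 0.2854 for dry air):

```python
P0 = 1000.0      # hPa, reference pressure (assumed)
KAPPA = 0.2854   # Rd / cp_d for dry air (assumed)

def theta_from_t(p_hpa, t_kelvin):
    """Potential temperature: the temperature a parcel would have if moved
    dry-adiabatically to the reference pressure P0."""
    return t_kelvin * (P0 / p_hpa) ** KAPPA

# A 260 K parcel at 500 hPa warms adiabatically on descent to 1000 hPa,
# so its potential temperature is well above 260 K.
theta = theta_from_t(500.0, 260.0)
```

At p = P0 the exponent term is 1, so θ equals T there; that identity is a quick sanity check on any implementation.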
@exporter.export
@preprocess_xarray
@check_units('<STR_LIT>', '<STR_LIT>', '<STR_LIT>')
def surface_based_cape_cin(pressure, temperature, dewpoint):
    r"""Calculate surface-based CAPE and CIN.

    Calculate the convective available potential energy (CAPE) and convective inhibition
    (CIN) of a given upper air profile for a surface-based parcel. CIN is integrated
    between the surface and LFC, CAPE is integrated between the LFC and EL (or top of
    sounding). Intersection points of the measured temperature profile and parcel profile
    are linearly interpolated.

    Parameters
    ----------
    pressure : `pint.Quantity`
        Atmospheric pressure profile. The first entry should be the starting
        (surface) observation.
    temperature : `pint.Quantity`
        Temperature profile
    dewpoint : `pint.Quantity`
        Dewpoint profile

    Returns
    -------
    `pint.Quantity`
        Surface based Convective Available Potential Energy (CAPE).
    `pint.Quantity`
        Surface based Convective INhibition (CIN).

    See Also
    --------
    cape_cin, parcel_profile

    """
    p, t, td, profile = parcel_profile_with_lcl(pressure, temperature, dewpoint)
    return cape_cin(p, t, td, profile)
f8476:m35
@exporter.export
@preprocess_xarray
@check_units('<STR_LIT>', '<STR_LIT>', '<STR_LIT>')
def most_unstable_cape_cin(pressure, temperature, dewpoint, **kwargs):
    r"""Calculate most unstable CAPE/CIN.

    Calculate the convective available potential energy (CAPE) and convective inhibition
    (CIN) of a given upper air profile and most unstable parcel path. CIN is integrated
    between the surface and LFC, CAPE is integrated between the LFC and EL (or top of
    sounding). Intersection points of the measured temperature profile and parcel profile
    are linearly interpolated.

    Parameters
    ----------
    pressure : `pint.Quantity`
        Pressure profile
    temperature : `pint.Quantity`
        Temperature profile
    dewpoint : `pint.Quantity`
        Dewpoint profile

    Returns
    -------
    `pint.Quantity`
        Most unstable Convective Available Potential Energy (CAPE).
    `pint.Quantity`
        Most unstable Convective INhibition (CIN).

    See Also
    --------
    cape_cin, most_unstable_parcel, parcel_profile

    """
    _, parcel_temperature, parcel_dewpoint, parcel_idx = most_unstable_parcel(pressure,
                                                                              temperature,
                                                                              dewpoint,
                                                                              **kwargs)
    mu_profile = parcel_profile(pressure[parcel_idx:], parcel_temperature, parcel_dewpoint)
    return cape_cin(pressure[parcel_idx:], temperature[parcel_idx:],
                    dewpoint[parcel_idx:], mu_profile)
f8476:m36
@exporter.export
@preprocess_xarray
@check_units('<STR_LIT>', '<STR_LIT>', '<STR_LIT>')
def mixed_parcel(p, temperature, dewpt, parcel_start_pressure=None,
                 heights=None, bottom=None, depth=100 * units.hPa, interpolate=True):
    r"""Calculate the properties of a parcel mixed from a layer.

    Determines the properties of an air parcel that is the result of complete mixing of
    a given atmospheric layer.

    Parameters
    ----------
    p : `pint.Quantity`
        Atmospheric pressure profile
    temperature : `pint.Quantity`
        Atmospheric temperature profile
    dewpt : `pint.Quantity`
        Atmospheric dewpoint profile
    parcel_start_pressure : `pint.Quantity`, optional
        Pressure at which the mixed parcel should begin (default None)
    heights : `pint.Quantity`, optional
        Atmospheric heights corresponding to the given pressures (default None)
    bottom : `pint.Quantity`, optional
        The bottom of the layer as a pressure or height above the surface pressure
        (default None)
    depth : `pint.Quantity`, optional
        The thickness of the layer as a pressure or height above the bottom of the layer
        (default 100 hPa)
    interpolate : bool, optional
        Interpolate the top and bottom points if they are not in the given data

    Returns
    -------
    `pint.Quantity, pint.Quantity, pint.Quantity`
        The pressure, temperature, and dewpoint of the mixed parcel.

    """
    if not parcel_start_pressure:
        parcel_start_pressure = p[0]
    theta = potential_temperature(p, temperature)
    mixing_ratio = saturation_mixing_ratio(p, dewpt)
    mean_theta, mean_mixing_ratio = mixed_layer(p, theta, mixing_ratio, bottom=bottom,
                                                heights=heights, depth=depth,
                                                interpolate=interpolate)
    mean_temperature = (mean_theta / potential_temperature(parcel_start_pressure,
                                                           1 * units.kelvin)) * units.kelvin
    mean_vapor_pressure = vapor_pressure(parcel_start_pressure, mean_mixing_ratio)
    mean_dewpoint = dewpoint(mean_vapor_pressure)
    return (parcel_start_pressure, mean_temperature.to(temperature.units),
            mean_dewpoint.to(dewpt.units))
f8476:m37
@exporter.export
@preprocess_xarray
@check_units('<STR_LIT>')
def mixed_layer(p, *args, **kwargs):
    r"""Mix variable(s) over a layer, yielding a mass-weighted average.

    This function will integrate a data variable with respect to pressure and determine
    the average value using the mean value theorem.

    Parameters
    ----------
    p : array-like
        Atmospheric pressure profile
    datavar : array-like
        Atmospheric variable measured at the given pressures
    heights : array-like, optional
        Atmospheric heights corresponding to the given pressures (default None)
    bottom : `pint.Quantity`, optional
        The bottom of the layer as a pressure or height above the surface pressure
        (default None)
    depth : `pint.Quantity`, optional
        The thickness of the layer as a pressure or height above the bottom of the layer
        (default 100 hPa)
    interpolate : bool, optional
        Interpolate the top and bottom points if they are not in the given data

    Returns
    -------
    `pint.Quantity`
        The mixed value of the data variable.

    """
    heights = kwargs.pop('<STR_LIT>', None)
    bottom = kwargs.pop('<STR_LIT>', None)
    depth = kwargs.pop('<STR_LIT>', 100 * units.hPa)
    interpolate = kwargs.pop('<STR_LIT>', True)
    layer = get_layer(p, *args, heights=heights, bottom=bottom,
                      depth=depth, interpolate=interpolate)
    p_layer = layer[0]
    datavars_layer = layer[1:]
    ret = []
    for datavar_layer in datavars_layer:
        actual_depth = abs(p_layer[0] - p_layer[-1])
        ret.append((-1. / actual_depth.m) * np.trapz(datavar_layer, p_layer)
                   * datavar_layer.units)
    return ret
f8476:m38
@exporter.export
@preprocess_xarray
@check_units('<STR_LIT>', '<STR_LIT>')
def dry_static_energy(heights, temperature):
    r"""Calculate the dry static energy of parcels.

    This function will calculate the dry static energy following the first two terms of
    equation 3.72 in [Hobbs2006]_.

    Notes
    -----
    .. math:: \text{dry static energy} = c_{pd} * T + gz

    * :math:`T` is temperature
    * :math:`z` is height

    Parameters
    ----------
    heights : array-like
        Atmospheric height
    temperature : array-like
        Atmospheric temperature

    Returns
    -------
    `pint.Quantity`
        The dry static energy

    """
    return (mpconsts.g * heights + mpconsts.Cp_d * temperature).to('<STR_LIT>')
f8476:m39
@exporter.export
@preprocess_xarray
@check_units('<STR_LIT>', '<STR_LIT>', '<STR_LIT>')
def moist_static_energy(heights, temperature, specific_humidity):
    r"""Calculate the moist static energy of parcels.

    This function will calculate the moist static energy following equation 3.72 in
    [Hobbs2006]_.

    Notes
    -----
    .. math:: \text{moist static energy} = c_{pd} * T + gz + L_v q

    * :math:`T` is temperature
    * :math:`z` is height
    * :math:`q` is specific humidity

    Parameters
    ----------
    heights : array-like
        Atmospheric height
    temperature : array-like
        Atmospheric temperature
    specific_humidity : array-like
        Atmospheric specific humidity

    Returns
    -------
    `pint.Quantity`
        The moist static energy

    """
    return (dry_static_energy(heights, temperature)
            + mpconsts.Lv * specific_humidity.to('<STR_LIT>')).to('<STR_LIT>')
f8476:m40
@exporter.export
@preprocess_xarray
@check_units('<STR_LIT>', '<STR_LIT>')
def thickness_hydrostatic(pressure, temperature, **kwargs):
    r"""Calculate the thickness of a layer via the hypsometric equation.

    This thickness calculation uses the pressure and temperature profiles (and optionally
    mixing ratio) via the hypsometric equation with virtual temperature adjustment

    .. math:: Z_2 - Z_1 = -\frac{R_d}{g} \int_{p_1}^{p_2} T_v d\ln p,

    which is based off of Equation 3.24 in [Hobbs2006]_.

    This assumes a hydrostatic atmosphere. Layer bottom and depth specified in pressure.

    Parameters
    ----------
    pressure : `pint.Quantity`
        Atmospheric pressure profile
    temperature : `pint.Quantity`
        Atmospheric temperature profile
    mixing : `pint.Quantity`, optional
        Profile of dimensionless mass mixing ratio. If none is given, virtual temperature
        is simply set to be the given temperature.
    molecular_weight_ratio : `pint.Quantity` or float, optional
        The ratio of the molecular weight of the constituent gas to that assumed for air.
        Defaults to the ratio for water vapor to dry air.
        (:math:`\epsilon\approx0.622`).
    bottom : `pint.Quantity`, optional
        The bottom of the layer in pressure. Defaults to the first observation.
    depth : `pint.Quantity`, optional
        The depth of the layer in hPa. Defaults to the full profile if bottom is not
        given, and 100 hPa if bottom is given.

    Returns
    -------
    `pint.Quantity`
        The thickness of the layer in meters.

    See Also
    --------
    thickness_hydrostatic_from_relative_humidity, pressure_to_height_std,
    virtual_temperature

    """
    mixing = kwargs.pop('<STR_LIT>', None)
    molecular_weight_ratio = kwargs.pop('<STR_LIT>', mpconsts.epsilon)
    bottom = kwargs.pop('<STR_LIT>', None)
    depth = kwargs.pop('<STR_LIT>', None)
    if bottom is None and depth is None:
        if mixing is None:
            layer_p, layer_virttemp = pressure, temperature
        else:
            layer_p = pressure
            layer_virttemp = virtual_temperature(temperature, mixing, molecular_weight_ratio)
    else:
        if mixing is None:
            layer_p, layer_virttemp = get_layer(pressure, temperature, bottom=bottom,
                                                depth=depth)
        else:
            layer_p, layer_temp, layer_w = get_layer(pressure, temperature, mixing,
                                                     bottom=bottom, depth=depth)
            layer_virttemp = virtual_temperature(layer_temp, layer_w, molecular_weight_ratio)
    return (- mpconsts.Rd / mpconsts.g * np.trapz(
        layer_virttemp.to('<STR_LIT>'), x=np.log(layer_p / units.hPa)) * units.K).to('m')
f8476:m41
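For a layer of constant virtual temperature, the hypsometric integral above has a closed form, Z2 − Z1 = (Rd Tv / g) ln(p1/p2), which makes a handy sanity check on the numerical version. A minimal sketch with assumed constants (Rd = 287.05 J kg⁻¹ K⁻¹, g = 9.80665 m s⁻²):

```python
import math

RD = 287.05   # J kg^-1 K^-1, dry-air gas constant (assumed)
G = 9.80665   # m s^-2, standard gravity (assumed)

def isothermal_thickness(p_bottom_hpa, p_top_hpa, t_virtual_k):
    """Closed-form hypsometric thickness for constant virtual temperature:
    Z2 - Z1 = (Rd * Tv / g) * ln(p_bottom / p_top)."""
    return RD * t_virtual_k / G * math.log(p_bottom_hpa / p_top_hpa)

# 1000-500 hPa thickness of a 0 degC isothermal layer, roughly 5.5 km.
dz = isothermal_thickness(1000.0, 500.0, 273.15)
```

Only the pressure *ratio* matters, so the pressures can be in any consistent unit; the library function's trapezoidal integral should converge to this value as the profile approaches isothermal.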
@exporter.export
@preprocess_xarray
@check_units('<STR_LIT>', '<STR_LIT>')
def thickness_hydrostatic_from_relative_humidity(pressure, temperature, relative_humidity,
                                                 **kwargs):
    r"""Calculate the thickness of a layer given pressure, temperature and relative humidity.

    Similar to ``thickness_hydrostatic``, this thickness calculation uses the pressure,
    temperature, and relative humidity profiles via the hypsometric equation with virtual
    temperature adjustment

    .. math:: Z_2 - Z_1 = -\frac{R_d}{g} \int_{p_1}^{p_2} T_v d\ln p,

    which is based off of Equation 3.24 in [Hobbs2006]_. Virtual temperature is calculated
    from the profiles of temperature and relative humidity.

    This assumes a hydrostatic atmosphere. Layer bottom and depth specified in pressure.

    Parameters
    ----------
    pressure : `pint.Quantity`
        Atmospheric pressure profile
    temperature : `pint.Quantity`
        Atmospheric temperature profile
    relative_humidity : `pint.Quantity`
        Atmospheric relative humidity profile. The relative humidity is expressed as a
        unitless ratio in the range [0, 1]. Can also pass a percentage if proper units
        are attached.
    bottom : `pint.Quantity`, optional
        The bottom of the layer in pressure. Defaults to the first observation.
    depth : `pint.Quantity`, optional
        The depth of the layer in hPa. Defaults to the full profile if bottom is not
        given, and 100 hPa if bottom is given.

    Returns
    -------
    `pint.Quantity`
        The thickness of the layer in meters.

    See Also
    --------
    thickness_hydrostatic, pressure_to_height_std, virtual_temperature,
    mixing_ratio_from_relative_humidity

    """
    bottom = kwargs.pop('<STR_LIT>', None)
    depth = kwargs.pop('<STR_LIT>', None)
    mixing = mixing_ratio_from_relative_humidity(relative_humidity, temperature, pressure)
    return thickness_hydrostatic(pressure, temperature, mixing=mixing, bottom=bottom,
                                 depth=depth)
f8476:m42
@exporter.export
@preprocess_xarray
@check_units('<STR_LIT>', '<STR_LIT>')
def brunt_vaisala_frequency_squared(heights, potential_temperature, axis=0):
    r"""Calculate the square of the Brunt-Vaisala frequency.

    Brunt-Vaisala frequency squared (a measure of atmospheric stability) is given by the
    formula:

    .. math:: N^2 = \frac{g}{\theta} \frac{d\theta}{dz}

    This formula is based off of Equations 3.75 and 3.77 in [Hobbs2006]_.

    Parameters
    ----------
    heights : array-like
        One-dimensional profile of atmospheric height
    potential_temperature : array-like
        Atmospheric potential temperature
    axis : int, optional
        The axis corresponding to vertical in the potential temperature array, defaults
        to 0.

    Returns
    -------
    array-like
        The square of the Brunt-Vaisala frequency.

    See Also
    --------
    brunt_vaisala_frequency, brunt_vaisala_period, potential_temperature

    """
    potential_temperature = potential_temperature.to('<STR_LIT>')
    return mpconsts.g / potential_temperature * first_derivative(potential_temperature,
                                                                 x=heights, axis=axis)
f8476:m43
@exporter.export
@preprocess_xarray
@check_units('<STR_LIT>', '<STR_LIT>')
def brunt_vaisala_frequency(heights, potential_temperature, axis=0):
    r"""Calculate the Brunt-Vaisala frequency.

    This function will calculate the Brunt-Vaisala frequency as follows:

    .. math:: N = \left( \frac{g}{\theta} \frac{d\theta}{dz} \right)^\frac{1}{2}

    This formula based off of Equations 3.75 and 3.77 in [Hobbs2006]_.

    This function is a wrapper for `brunt_vaisala_frequency_squared` that filters out
    negative (unstable) quantities and takes the square root.

    Parameters
    ----------
    heights : array-like
        One-dimensional profile of atmospheric height
    potential_temperature : array-like
        Atmospheric potential temperature
    axis : int, optional
        The axis corresponding to vertical in the potential temperature array, defaults
        to 0.

    Returns
    -------
    array-like
        Brunt-Vaisala frequency.

    See Also
    --------
    brunt_vaisala_frequency_squared, brunt_vaisala_period, potential_temperature

    """
    bv_freq_squared = brunt_vaisala_frequency_squared(heights, potential_temperature,
                                                      axis=axis)
    bv_freq_squared[bv_freq_squared.magnitude < 0] = np.nan
    return np.sqrt(bv_freq_squared)
f8476:m44
@exporter.export
@preprocess_xarray
@check_units('<STR_LIT>', '<STR_LIT>')
def brunt_vaisala_period(heights, potential_temperature, axis=0):
    r"""Calculate the Brunt-Vaisala period.

    This function is a helper function for `brunt_vaisala_frequency` that calculates the
    period of oscillation as in Exercise 3.13 of [Hobbs2006]_:

    .. math:: \tau = \frac{2\pi}{N}

    Returns `NaN` when :math:`N^2 \le 0`.

    Parameters
    ----------
    heights : array-like
        One-dimensional profile of atmospheric height
    potential_temperature : array-like
        Atmospheric potential temperature
    axis : int, optional
        The axis corresponding to vertical in the potential temperature array, defaults
        to 0.

    Returns
    -------
    array-like
        Brunt-Vaisala period.

    See Also
    --------
    brunt_vaisala_frequency, brunt_vaisala_frequency_squared, potential_temperature

    """
    bv_freq_squared = brunt_vaisala_frequency_squared(heights, potential_temperature,
                                                      axis=axis)
    bv_freq_squared[bv_freq_squared.magnitude <= 0] = np.nan
    return 2 * np.pi / np.sqrt(bv_freq_squared)
f8476:m45
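The three Brunt-Vaisala records above chain one finite-difference calculation: N² = (g/θ) dθ/dz, then N = √N², then τ = 2π/N. A minimal float/NumPy sketch under assumed values (g = 9.80665 m s⁻², a made-up stable layer where θ rises 3 K per km):

```python
import numpy as np

G = 9.80665  # m s^-2, standard gravity (assumed)

# A stable layer: potential temperature increases linearly with height.
heights = np.array([0.0, 1000.0, 2000.0])   # m
theta = np.array([300.0, 303.0, 306.0])     # K

# N^2 = (g / theta) * dtheta/dz, via a centered finite difference.
n_squared = G / theta * np.gradient(theta, heights)
n = np.sqrt(n_squared)        # Brunt-Vaisala frequency, rad s^-1
period = 2.0 * np.pi / n      # oscillation period, s
```

For this profile N² is positive everywhere (about 1e-4 s⁻²), giving an oscillation period on the order of ten minutes, which is typical of a stably stratified troposphere.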
@exporter.export
@preprocess_xarray
@check_units('<STR_LIT>', '<STR_LIT>', '<STR_LIT>')
def wet_bulb_temperature(pressure, temperature, dewpoint):
    r"""Calculate the wet-bulb temperature using Normand's rule.

    This function calculates the wet-bulb temperature using the Normand method. The LCL
    is computed, and that parcel brought down to the starting pressure along a moist
    adiabat. The Normand method (and others) are described and compared by [Knox2017]_.

    Parameters
    ----------
    pressure : `pint.Quantity`
        Initial atmospheric pressure
    temperature : `pint.Quantity`
        Initial atmospheric temperature
    dewpoint : `pint.Quantity`
        Initial atmospheric dewpoint

    Returns
    -------
    array-like
        Wet-bulb temperature

    See Also
    --------
    lcl, moist_lapse

    """
    if not hasattr(pressure, '<STR_LIT>'):
        pressure = atleast_1d(pressure)
        temperature = atleast_1d(temperature)
        dewpoint = atleast_1d(dewpoint)
    it = np.nditer([pressure, temperature, dewpoint, None],
                   op_dtypes=['float', 'float', 'float', 'float'],
                   flags=['<STR_LIT>'])
    for press, temp, dewp, ret in it:
        press = press * pressure.units
        temp = temp * temperature.units
        dewp = dewp * dewpoint.units
        lcl_pressure, lcl_temperature = lcl(press, temp, dewp)
        moist_adiabat_temperatures = moist_lapse(concatenate([lcl_pressure, press]),
                                                 lcl_temperature)
        ret[...] = moist_adiabat_temperatures[-1]
    if it.operands[3].size == 1:
        return it.operands[3][0] * moist_adiabat_temperatures.units
    return it.operands[3] * moist_adiabat_temperatures.units
f8476:m46
@exporter.export
@preprocess_xarray
@check_units('<STR_LIT>', '<STR_LIT>')
def static_stability(pressure, temperature, axis=0):
    r"""Calculate the static stability within a vertical profile.

    .. math:: \sigma = -\frac{RT}{p} \frac{\partial \ln \theta}{\partial p}

    This formula is based on equation 4.3.6 in [Bluestein1992]_.

    Parameters
    ----------
    pressure : array-like
        Profile of atmospheric pressure
    temperature : array-like
        Profile of temperature
    axis : int, optional
        The axis corresponding to vertical in the pressure and temperature arrays,
        defaults to 0.

    Returns
    -------
    array-like
        The profile of static stability.

    """
    theta = potential_temperature(pressure, temperature)
    return - mpconsts.Rd * temperature / pressure * first_derivative(np.log(theta / units.K),
                                                                     x=pressure, axis=axis)
f8476:m47
@exporter.export
@preprocess_xarray
@check_units('<STR_LIT>', '<STR_LIT>', '<STR_LIT>')
def dewpoint_from_specific_humidity(specific_humidity, temperature, pressure):
    r"""Calculate the dewpoint from specific humidity, temperature, and pressure.

    Parameters
    ----------
    specific_humidity : `pint.Quantity`
        Specific humidity of air
    temperature : `pint.Quantity`
        Air temperature
    pressure : `pint.Quantity`
        Total atmospheric pressure

    Returns
    -------
    `pint.Quantity`
        Dewpoint temperature

    See Also
    --------
    relative_humidity_from_mixing_ratio, dewpoint_rh

    """
    return dewpoint_rh(temperature, relative_humidity_from_specific_humidity(specific_humidity,
                                                                             temperature,
                                                                             pressure))
f8476:m48
@exporter.export<EOL>@preprocess_xarray<EOL>@check_units('<STR_LIT>', '<STR_LIT>', '<STR_LIT>')<EOL>def vertical_velocity_pressure(w, pressure, temperature, mixing=<NUM_LIT:0>):
rho = density(pressure, temperature, mixing)<EOL>return (- mpconsts.g * rho * w).to('<STR_LIT>')<EOL>
r"""Calculate omega from w assuming hydrostatic conditions. This function converts vertical velocity with respect to height :math:`\left(w = \frac{Dz}{Dt}\right)` to that with respect to pressure :math:`\left(\omega = \frac{Dp}{Dt}\right)` assuming hydrostatic conditions on the synoptic scale. By Equation 7.33 in [Hobbs2006]_, .. math: \omega \simeq -\rho g w Density (:math:`\rho`) is calculated using the :func:`density` function, from the given pressure and temperature. If `mixing` is given, the virtual temperature correction is used, otherwise, dry air is assumed. Parameters ---------- w: `pint.Quantity` Vertical velocity in terms of height pressure: `pint.Quantity` Total atmospheric pressure temperature: `pint.Quantity` Air temperature mixing: `pint.Quantity`, optional Mixing ratio of air Returns ------- `pint.Quantity` Vertical velocity in terms of pressure (in Pascals / second) See Also -------- density, vertical_velocity
f8476:m49
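The conversion is a one-liner once density is in hand. A minimal sketch, assuming dry air (so `rho = p / (Rd * T)` rather than MetPy's `density` with a mixing-ratio correction) and SI inputs:

```python
# Assumed constants: Rd = 287.05 J/(kg K), g = 9.81 m/s^2.
RD, G = 287.05, 9.81

def vertical_velocity_pressure(w, pressure, temperature):
    """omega = -rho * g * w (Pa/s), with w in m/s, pressure in Pa,
    temperature in K; dry-air ideal-gas density."""
    rho = pressure / (RD * temperature)  # kg/m^3
    return -rho * G * w

# 1 m/s of ascent at 850 hPa and 280 K gives omega of roughly -10 Pa/s.
omega = vertical_velocity_pressure(1.0, 85000.0, 280.0)
```

Note the sign flip: ascent (positive `w`) corresponds to negative `omega`, since pressure decreases with height.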
@exporter.export<EOL>@preprocess_xarray<EOL>@check_units('<STR_LIT>', '<STR_LIT>', '<STR_LIT>')<EOL>def vertical_velocity(omega, pressure, temperature, mixing=<NUM_LIT:0>):
rho = density(pressure, temperature, mixing)<EOL>return (omega / (- mpconsts.g * rho)).to('<STR_LIT>')<EOL>
r"""Calculate w from omega assuming hydrostatic conditions. This function converts vertical velocity with respect to pressure :math:`\left(\omega = \frac{Dp}{Dt}\right)` to that with respect to height :math:`\left(w = \frac{Dz}{Dt}\right)` assuming hydrostatic conditions on the synoptic scale. By Equation 7.33 in [Hobbs2006]_, .. math: \omega \simeq -\rho g w so that .. math w \simeq \frac{- \omega}{\rho g} Density (:math:`\rho`) is calculated using the :func:`density` function, from the given pressure and temperature. If `mixing` is given, the virtual temperature correction is used, otherwise, dry air is assumed. Parameters ---------- omega: `pint.Quantity` Vertical velocity in terms of pressure pressure: `pint.Quantity` Total atmospheric pressure temperature: `pint.Quantity` Air temperature mixing: `pint.Quantity`, optional Mixing ratio of air Returns ------- `pint.Quantity` Vertical velocity in terms of height (in meters / second) See Also -------- density, vertical_velocity_pressure
f8476:m50
@exporter.export<EOL>@preprocess_xarray<EOL>@check_units('<STR_LIT>', '<STR_LIT>', '<STR_LIT>')<EOL>def precipitable_water(dewpt, pressure, bottom=None, top=None):
<EOL>sort_inds = np.argsort(pressure)[::-<NUM_LIT:1>]<EOL>pressure = pressure[sort_inds]<EOL>dewpt = dewpt[sort_inds]<EOL>if top is None:<EOL><INDENT>top = np.nanmin(pressure) * pressure.units<EOL><DEDENT>if bottom is None:<EOL><INDENT>bottom = np.nanmax(pressure) * pressure.units<EOL><DEDENT>pres_layer, dewpt_layer = get_layer(pressure, dewpt, bottom=bottom, depth=bottom - top)<EOL>w = mixing_ratio(saturation_vapor_pressure(dewpt_layer), pres_layer)<EOL>pw = -<NUM_LIT:1.> * (np.trapz(w.magnitude, pres_layer.magnitude) * (w.units * pres_layer.units)<EOL>/ (mpconsts.g * mpconsts.rho_l))<EOL>return pw.to('<STR_LIT>')<EOL>
r"""Calculate precipitable water through the depth of a sounding. Formula used is: .. math:: -\frac{1}{\rho_l g} \int\limits_{p_\text{bottom}}^{p_\text{top}} r dp from [Salby1996]_, p. 28. Parameters ---------- dewpt : `pint.Quantity` Atmospheric dewpoint profile pressure : `pint.Quantity` Atmospheric pressure profile bottom: `pint.Quantity`, optional Bottom of the layer, specified in pressure. Defaults to None (highest pressure). top: `pint.Quantity`, optional The top of the layer, specified in pressure. Defaults to None (lowest pressure). Returns ------- `pint.Quantity` The precipitable water in the layer
f8477:m0
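The precipitable-water integral above can be sketched with a trapezoidal sum. This is an illustrative stand-in, not MetPy's implementation: it assumes Bolton's (1980) saturation vapor pressure approximation for the mixing ratio, `rho_l = 1000 kg/m^3`, and `g = 9.81 m/s^2`:

```python
import math

# Assumed constants: liquid water density (kg/m^3), gravity (m/s^2),
# and epsilon = Rd/Rv ~ 0.622.
RHO_L, G, EPS = 1000.0, 9.81, 0.622

def mixing_ratio_from_dewpoint(dewpoint_c, pressure_hpa):
    """r = eps * e / (p - e), with e from Bolton's approximation (hPa)."""
    e = 6.112 * math.exp(17.67 * dewpoint_c / (dewpoint_c + 243.5))
    return EPS * e / (pressure_hpa - e)

def precipitable_water(dewpoints_c, pressures_hpa):
    """Trapezoidal integral of r over p, scaled by 1/(rho_l g) -> meters.
    Pressures are assumed sorted from highest (surface) to lowest."""
    r = [mixing_ratio_from_dewpoint(td, p)
         for td, p in zip(dewpoints_c, pressures_hpa)]
    integral = 0.0
    for i in range(len(pressures_hpa) - 1):
        dp = (pressures_hpa[i + 1] - pressures_hpa[i]) * 100.0  # hPa -> Pa
        integral += 0.5 * (r[i] + r[i + 1]) * dp
    return -integral / (RHO_L * G)  # meters of liquid water

# A moist sounding drying with height: roughly a few centimeters of PW.
pw = precipitable_water([20.0, 15.0, 5.0], [1000.0, 850.0, 700.0])
```

Because pressure decreases upward, each `dp` is negative; the leading minus sign in the formula makes the result positive.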
@exporter.export<EOL>@preprocess_xarray<EOL>@check_units('<STR_LIT>')<EOL>def mean_pressure_weighted(pressure, *args, **kwargs):
heights = kwargs.pop('<STR_LIT>', None)<EOL>bottom = kwargs.pop('<STR_LIT>', None)<EOL>depth = kwargs.pop('<STR_LIT>', None)<EOL>ret = [] <EOL>layer_arg = get_layer(pressure, *args, heights=heights,<EOL>bottom=bottom, depth=depth)<EOL>layer_p = layer_arg[<NUM_LIT:0>]<EOL>layer_arg = layer_arg[<NUM_LIT:1>:]<EOL>pres_int = <NUM_LIT:0.5> * (layer_p[-<NUM_LIT:1>].magnitude**<NUM_LIT:2> - layer_p[<NUM_LIT:0>].magnitude**<NUM_LIT:2>)<EOL>for i, datavar in enumerate(args):<EOL><INDENT>arg_mean = np.trapz(layer_arg[i] * layer_p, x=layer_p) / pres_int<EOL>ret.append(arg_mean * datavar.units)<EOL><DEDENT>return ret<EOL>
r"""Calculate pressure-weighted mean of an arbitrary variable through a layer. Layer top and bottom specified in height or pressure. Parameters ---------- pressure : `pint.Quantity` Atmospheric pressure profile *args : `pint.Quantity` Parameters for which the pressure-weighted mean is to be calculated. heights : `pint.Quantity`, optional Heights from sounding. Standard atmosphere heights assumed (if needed) if no heights are given. bottom: `pint.Quantity`, optional The bottom of the layer in either the provided height coordinate or in pressure. Don't provide in meters AGL unless the provided height coordinate is meters AGL. Default is the first observation, assumed to be the surface. depth: `pint.Quantity`, optional The depth of the layer in meters or hPa. Returns ------- `pint.Quantity` u_mean: u-component of layer mean wind. `pint.Quantity` v_mean: v-component of layer mean wind.
f8477:m1
@exporter.export<EOL>@preprocess_xarray<EOL>@check_units('<STR_LIT>', '<STR_LIT>', '<STR_LIT>', '<STR_LIT>')<EOL>def bunkers_storm_motion(pressure, u, v, heights):
<EOL>wind_mean = concatenate(mean_pressure_weighted(pressure, u, v, heights=heights,<EOL>depth=<NUM_LIT> * units('<STR_LIT>')))<EOL>wind_500m = concatenate(mean_pressure_weighted(pressure, u, v, heights=heights,<EOL>depth=<NUM_LIT> * units('<STR_LIT>')))<EOL>wind_5500m = concatenate(mean_pressure_weighted(pressure, u, v, heights=heights,<EOL>depth=<NUM_LIT> * units('<STR_LIT>'),<EOL>bottom=heights[<NUM_LIT:0>] + <NUM_LIT> * units('<STR_LIT>')))<EOL>shear = wind_5500m - wind_500m<EOL>shear_cross = concatenate([shear[<NUM_LIT:1>], -shear[<NUM_LIT:0>]])<EOL>rdev = shear_cross * (<NUM_LIT> * units('<STR_LIT>').to(u.units) / np.hypot(*shear))<EOL>right_mover = wind_mean + rdev<EOL>left_mover = wind_mean - rdev<EOL>return right_mover, left_mover, wind_mean<EOL>
r"""Calculate the Bunkers right-mover and left-mover storm motions and sfc-6km mean flow. Uses the storm motion calculation from [Bunkers2000]_. Parameters ---------- pressure : array-like Pressure from sounding u : array-like U component of the wind v : array-like V component of the wind heights : array-like Heights from sounding Returns ------- right_mover: `pint.Quantity` U and v component of Bunkers RM storm motion left_mover: `pint.Quantity` U and v component of Bunkers LM storm motion wind_mean: `pint.Quantity` U and v component of sfc-6km mean flow
f8477:m2
@exporter.export<EOL>@preprocess_xarray<EOL>@check_units('<STR_LIT>', '<STR_LIT>', '<STR_LIT>')<EOL>def bulk_shear(pressure, u, v, heights=None, bottom=None, depth=None):
_, u_layer, v_layer = get_layer(pressure, u, v, heights=heights,<EOL>bottom=bottom, depth=depth)<EOL>u_shr = u_layer[-<NUM_LIT:1>] - u_layer[<NUM_LIT:0>]<EOL>v_shr = v_layer[-<NUM_LIT:1>] - v_layer[<NUM_LIT:0>]<EOL>return u_shr, v_shr<EOL>
r"""Calculate bulk shear through a layer. Layer top and bottom specified in meters or pressure. Parameters ---------- pressure : `pint.Quantity` Atmospheric pressure profile u : `pint.Quantity` U-component of wind. v : `pint.Quantity` V-component of wind. height : `pint.Quantity`, optional Heights from sounding depth: `pint.Quantity`, optional The depth of the layer in meters or hPa. Defaults to 100 hPa. bottom: `pint.Quantity`, optional The bottom of the layer in height or pressure coordinates. If using a height, it must be in the same coordinates as the given heights (i.e., don't use meters AGL unless given heights are in meters AGL.) Defaults to the highest pressure or lowest height given. Returns ------- u_shr: `pint.Quantity` u-component of layer bulk shear v_shr: `pint.Quantity` v-component of layer bulk shear
f8477:m3
@exporter.export<EOL>@preprocess_xarray<EOL>@check_units('<STR_LIT>', '<STR_LIT>', '<STR_LIT>')<EOL>def supercell_composite(mucape, effective_storm_helicity, effective_shear):
effective_shear = np.clip(atleast_1d(effective_shear), None, <NUM_LIT:20> * units('<STR_LIT>'))<EOL>effective_shear[effective_shear < <NUM_LIT:10> * units('<STR_LIT>')] = <NUM_LIT:0> * units('<STR_LIT>')<EOL>effective_shear = effective_shear / (<NUM_LIT:20> * units('<STR_LIT>'))<EOL>return ((mucape / (<NUM_LIT:1000> * units('<STR_LIT>')))<EOL>* (effective_storm_helicity / (<NUM_LIT:50> * units('<STR_LIT>')))<EOL>* effective_shear).to('<STR_LIT>')<EOL>
r"""Calculate the supercell composite parameter. The supercell composite parameter is designed to identify environments favorable for the development of supercells, and is calculated using the formula developed by [Thompson2004]_: .. math:: \text{SCP} = \frac{\text{MUCAPE}}{1000 \text{J/kg}} * \frac{\text{Effective SRH}}{50 \text{m}^2/\text{s}^2} * \frac{\text{Effective Shear}}{20 \text{m/s}} The effective_shear term is set to zero below 10 m/s and capped at 1 when effective_shear exceeds 20 m/s. Parameters ---------- mucape : `pint.Quantity` Most-unstable CAPE effective_storm_helicity : `pint.Quantity` Effective-layer storm-relative helicity effective_shear : `pint.Quantity` Effective bulk shear Returns ------- array-like supercell composite
f8477:m4
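The SCP normalizations and the shear-term clipping described above reduce to a few lines for scalar inputs. A minimal sketch, without pint units or array handling:

```python
# Supercell composite parameter for scalar inputs, following the
# Thompson et al. (2004) normalizations quoted in the docstring.
def supercell_composite(mucape, eff_srh, eff_shear):
    """mucape in J/kg, eff_srh in m^2/s^2, eff_shear in m/s."""
    if eff_shear < 10.0:
        shear_term = 0.0                          # zeroed below 10 m/s
    else:
        shear_term = min(eff_shear, 20.0) / 20.0  # capped at 1
    return (mucape / 1000.0) * (eff_srh / 50.0) * shear_term

# 2000 J/kg MUCAPE, 125 m^2/s^2 effective SRH, 25 m/s effective shear.
scp = supercell_composite(2000.0, 125.0, 25.0)
```

With shear above 20 m/s the shear term saturates at 1, so this example evaluates to 2.0 * 2.5 * 1.0.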
@exporter.export<EOL>@preprocess_xarray<EOL>@check_units('<STR_LIT>', '<STR_LIT>', '<STR_LIT>', '<STR_LIT>')<EOL>def significant_tornado(sbcape, surface_based_lcl_height, storm_helicity_1km, shear_6km):
surface_based_lcl_height = np.clip(atleast_1d(surface_based_lcl_height),<EOL><NUM_LIT:1000> * units.m, <NUM_LIT> * units.m)<EOL>surface_based_lcl_height[surface_based_lcl_height > <NUM_LIT> * units.m] = <NUM_LIT:0> * units.m<EOL>surface_based_lcl_height = ((<NUM_LIT> * units.m - surface_based_lcl_height)<EOL>/ (<NUM_LIT> * units.m))<EOL>shear_6km = np.clip(atleast_1d(shear_6km), None, <NUM_LIT:30> * units('<STR_LIT>'))<EOL>shear_6km[shear_6km < <NUM_LIT> * units('<STR_LIT>')] = <NUM_LIT:0> * units('<STR_LIT>')<EOL>shear_6km /= <NUM_LIT:20> * units('<STR_LIT>')<EOL>return ((sbcape / (<NUM_LIT> * units('<STR_LIT>')))<EOL>* surface_based_lcl_height<EOL>* (storm_helicity_1km / (<NUM_LIT> * units('<STR_LIT>')))<EOL>* shear_6km)<EOL>
r"""Calculate the significant tornado parameter (fixed layer). The significant tornado parameter is designed to identify environments favorable for the production of significant tornadoes contingent upon the development of supercells. It's calculated according to the formula used on the SPC mesoanalysis page, updated in [Thompson2004]_: .. math:: \text{SIGTOR} = \frac{\text{SBCAPE}}{1500 \text{J/kg}} * \frac{(2000 \text{m} - \text{LCL}_\text{SB})}{1000 \text{m}} * \frac{SRH_{\text{1km}}}{150 \text{m}^\text{s}/\text{s}^2} * \frac{\text{Shear}_\text{6km}}{20 \text{m/s}} The lcl height is set to zero when the lcl is above 2000m and capped at 1 when below 1000m, and the shr6 term is set to 0 when shr6 is below 12.5 m/s and maxed out at 1.5 when shr6 exceeds 30 m/s. Parameters ---------- sbcape : `pint.Quantity` Surface-based CAPE surface_based_lcl_height : `pint.Quantity` Surface-based lifted condensation level storm_helicity_1km : `pint.Quantity` Surface-1km storm-relative helicity shear_6km : `pint.Quantity` Surface-6km bulk shear Returns ------- array-like significant tornado parameter
f8477:m5
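The SIGTOR clipping rules can likewise be sketched for scalar inputs. Clamping the LCL height to [1000 m, 2000 m] before forming the term reproduces both the "capped at 1" and "set to zero" behaviors in one step:

```python
# Fixed-layer significant tornado parameter for scalar inputs;
# units assumed: J/kg, m, m^2/s^2, m/s.
def significant_tornado(sbcape, lcl_height_m, srh_1km, shear_6km):
    lcl = min(max(lcl_height_m, 1000.0), 2000.0)  # clamp to [1000, 2000] m
    lcl_term = (2000.0 - lcl) / 1000.0            # 1 below 1 km, 0 above 2 km
    if shear_6km < 12.5:
        shear_term = 0.0                          # zeroed below 12.5 m/s
    else:
        shear_term = min(shear_6km, 30.0) / 20.0  # maxes out at 1.5
    return (sbcape / 1500.0) * lcl_term * (srh_1km / 150.0) * shear_term

# 3000 J/kg SBCAPE, a 900 m LCL, 300 m^2/s^2 1-km SRH, 25 m/s 6-km shear.
sigtor = significant_tornado(3000.0, 900.0, 300.0, 25.0)
```

Here the low LCL saturates its term at 1, giving 2.0 * 1.0 * 2.0 * 1.25.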
@exporter.export<EOL>@preprocess_xarray<EOL>@check_units('<STR_LIT>', '<STR_LIT>', '<STR_LIT>', '<STR_LIT>', '<STR_LIT>', '<STR_LIT>')<EOL>def critical_angle(pressure, u, v, heights, stormu, stormv):
<EOL>u = u.to('<STR_LIT>')<EOL>v = v.to('<STR_LIT>')<EOL>stormu = stormu.to('<STR_LIT>')<EOL>stormv = stormv.to('<STR_LIT>')<EOL>sort_inds = np.argsort(pressure)[::-<NUM_LIT:1>]<EOL>pressure = pressure[sort_inds]<EOL>heights = heights[sort_inds]<EOL>u = u[sort_inds]<EOL>v = v[sort_inds]<EOL>shr5 = bulk_shear(pressure, u, v, heights=heights, depth=<NUM_LIT> * units('<STR_LIT>'))<EOL>umn = stormu - u[<NUM_LIT:0>]<EOL>vmn = stormv - v[<NUM_LIT:0>]<EOL>vshr = np.asarray([shr5[<NUM_LIT:0>].magnitude, shr5[<NUM_LIT:1>].magnitude])<EOL>vsm = np.asarray([umn.magnitude, vmn.magnitude])<EOL>angle_c = np.dot(vshr, vsm) / (np.linalg.norm(vshr) * np.linalg.norm(vsm))<EOL>critical_angle = np.arccos(angle_c) * units('<STR_LIT>')<EOL>return critical_angle.to('<STR_LIT>')<EOL>
r"""Calculate the critical angle. The critical angle is the angle between the 10m storm-relative inflow vector and the 10m-500m shear vector. A critical angle near 90 degrees indicates that a storm in this environment on the indicated storm motion vector is likely ingesting purely streamwise vorticity into its updraft, and [Esterheld2008]_ showed that significantly tornadic supercells tend to occur in environments with critical angles near 90 degrees. Parameters ---------- pressure : `pint.Quantity` Pressures from sounding. u : `pint.Quantity` U-component of sounding winds. v : `pint.Quantity` V-component of sounding winds. heights : `pint.Quantity` Heights from sounding. stormu : `pint.Quantity` U-component of storm motion. stormv : `pint.Quantity` V-component of storm motion. Returns ------- `pint.Quantity` critical angle in degrees
f8477:m6
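At its core the critical angle is just the angle between two vectors: the low-level shear and the storm-relative inflow. A minimal scalar sketch of that geometry (illustrative names; MetPy instead extracts the 0-500 m shear from a full profile):

```python
import math

# Critical angle between the 0-500 m shear vector and the surface
# storm-relative inflow vector, for scalar wind components in m/s.
def critical_angle(u_sfc, v_sfc, u_500m, v_500m, storm_u, storm_v):
    shear = (u_500m - u_sfc, v_500m - v_sfc)
    inflow = (storm_u - u_sfc, storm_v - v_sfc)
    cos_angle = ((shear[0] * inflow[0] + shear[1] * inflow[1])
                 / (math.hypot(*shear) * math.hypot(*inflow)))
    return math.degrees(math.acos(cos_angle))

# Westerly shear with a southerly storm-relative inflow: 90 degrees,
# i.e. purely streamwise vorticity ingestion.
angle = critical_angle(0.0, 0.0, 10.0, 0.0, 0.0, 10.0)
```

When the two vectors are parallel the angle is 0 degrees (purely crosswise vorticity); the tornadic-supercell signal discussed above corresponds to angles near 90.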
def get_bounds_data():
pressures = np.linspace(<NUM_LIT:1000>, <NUM_LIT:100>, <NUM_LIT:10>) * units.hPa<EOL>heights = pressure_to_height_std(pressures)<EOL>return pressures, heights<EOL>
Provide pressure and height data for testing layer bounds calculation.
f8483:m15
@exporter.export<EOL>@preprocess_xarray<EOL>def wind_speed(u, v):
speed = np.sqrt(u * u + v * v)<EOL>return speed<EOL>
r"""Compute the wind speed from u and v-components. Parameters ---------- u : array_like Wind component in the X (East-West) direction v : array_like Wind component in the Y (North-South) direction Returns ------- wind speed: array_like The speed of the wind See Also -------- wind_components
f8485:m0
@exporter.export<EOL>@preprocess_xarray<EOL>def wind_direction(u, v):
wdir = <NUM_LIT> * units.deg - np.arctan2(-v, -u)<EOL>origshape = wdir.shape<EOL>wdir = atleast_1d(wdir)<EOL>wdir[wdir <= <NUM_LIT:0>] += <NUM_LIT> * units.deg<EOL>calm_mask = (np.asarray(u) == <NUM_LIT:0.>) & (np.asarray(v) == <NUM_LIT:0.>)<EOL>if np.any(calm_mask):<EOL><INDENT>wdir[calm_mask] = <NUM_LIT:0.> * units.deg<EOL><DEDENT>return wdir.reshape(origshape).to('<STR_LIT>')<EOL>
r"""Compute the wind direction from u and v-components. Parameters ---------- u : array_like Wind component in the X (East-West) direction v : array_like Wind component in the Y (North-South) direction Returns ------- direction: `pint.Quantity` The direction of the wind in interval [0, 360] degrees, specified as the direction from which it is blowing, with 360 being North. See Also -------- wind_components Notes ----- In the case of calm winds (where `u` and `v` are zero), this function returns a direction of 0.
f8485:m1
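The `90 deg - atan2(-v, -u)` construction and the calm/360 conventions described above can be shown in a plain-math sketch (scalars only, no pint units or array handling):

```python
import math

# Meteorological wind direction from u, v components: the direction
# the wind blows FROM, degrees clockwise from north, with 360 meaning
# north and 0 reserved for calm.
def wind_direction(u, v):
    if u == 0.0 and v == 0.0:
        return 0.0  # calm, per the docstring convention
    wdir = 90.0 - math.degrees(math.atan2(-v, -u))
    if wdir <= 0.0:
        wdir += 360.0  # map results into (0, 360]
    return wdir

southerly = wind_direction(0.0, 10.0)   # blowing from the south
westerly = wind_direction(10.0, 0.0)    # blowing from the west
```

Negating `u` and `v` before `atan2` flips the vector to point back toward the source of the wind; subtracting from 90 degrees converts from math convention (counterclockwise from east) to compass convention (clockwise from north).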
@exporter.export<EOL>@preprocess_xarray<EOL>def wind_components(speed, wdir):
wdir = _check_radians(wdir, max_radians=<NUM_LIT:4> * np.pi)<EOL>u = -speed * np.sin(wdir)<EOL>v = -speed * np.cos(wdir)<EOL>return u, v<EOL>
r"""Calculate the U, V wind vector components from the speed and direction. Parameters ---------- speed : array_like The wind speed (magnitude) wdir : array_like The wind direction, specified as the direction from which the wind is blowing (0-2 pi radians or 0-360 degrees), with 360 degrees being North. Returns ------- u, v : tuple of array_like The wind components in the X (East-West) and Y (North-South) directions, respectively. See Also -------- wind_speed wind_direction Examples -------- >>> from metpy.units import units >>> metpy.calc.wind_components(10. * units('m/s'), 225. * units.deg) (<Quantity(7.071067811865475, 'meter / second')>, <Quantity(7.071067811865477, 'meter / second')>)
f8485:m2
@exporter.export<EOL>@preprocess_xarray<EOL>@deprecated('<STR_LIT>', addendum='<STR_LIT>',<EOL>pending=False)<EOL>def get_wind_speed(u, v):
return wind_speed(u, v)<EOL>
Wrap wind_speed for deprecated get_wind_speed function.
f8485:m3
@exporter.export<EOL>@preprocess_xarray<EOL>@deprecated('<STR_LIT>', addendum='<STR_LIT>',<EOL>pending=False)<EOL>def get_wind_dir(u, v):
return wind_direction(u, v)<EOL>
Wrap wind_direction for deprecated get_wind_dir function.
f8485:m4
@exporter.export<EOL>@preprocess_xarray<EOL>@deprecated('<STR_LIT>', addendum='<STR_LIT>',<EOL>pending=False)<EOL>def get_wind_components(u, v):
return wind_components(u, v)<EOL>
Wrap wind_components for deprecated get_wind_components function.
f8485:m5
@exporter.export<EOL>@preprocess_xarray<EOL>@check_units(temperature='<STR_LIT>', speed='<STR_LIT>')<EOL>def windchill(temperature, speed, face_level_winds=False, mask_undefined=True):
<EOL>if face_level_winds:<EOL><INDENT>speed = speed * <NUM_LIT><EOL><DEDENT>temp_limit, speed_limit = <NUM_LIT> * units.degC, <NUM_LIT:3> * units.mph<EOL>speed_factor = speed.to('<STR_LIT>').magnitude ** <NUM_LIT><EOL>wcti = units.Quantity((<NUM_LIT> + <NUM_LIT> * speed_factor) * temperature.to('<STR_LIT>').magnitude<EOL>- <NUM_LIT> * speed_factor + <NUM_LIT>, units.degC).to(temperature.units)<EOL>if mask_undefined:<EOL><INDENT>mask = np.array((temperature > temp_limit) | (speed <= speed_limit))<EOL>if mask.any():<EOL><INDENT>wcti = masked_array(wcti, mask=mask)<EOL><DEDENT><DEDENT>return wcti<EOL>
r"""Calculate the Wind Chill Temperature Index (WCTI). Calculates WCTI from the current temperature and wind speed using the formula outlined by the FCM [FCMR192003]_. Specifically, these formulas assume that wind speed is measured at 10m. If, instead, the speeds are measured at face level, the winds need to be multiplied by a factor of 1.5 (this can be done by specifying `face_level_winds` as `True`.) Parameters ---------- temperature : `pint.Quantity` The air temperature speed : `pint.Quantity` The wind speed at 10m. If instead the winds are at face level, `face_level_winds` should be set to `True` and the 1.5 multiplicative correction will be applied automatically. face_level_winds : bool, optional A flag indicating whether the wind speeds were measured at facial level instead of 10m, thus requiring a correction. Defaults to `False`. mask_undefined : bool, optional A flag indicating whether a masked array should be returned with values where wind chill is undefined masked. These are values where the temperature > 50F or wind speed <= 3 miles per hour. Defaults to `True`. Returns ------- `pint.Quantity` The corresponding Wind Chill Temperature Index value(s) See Also -------- heat_index
f8485:m6
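The FCM/NWS wind chill regression has a compact metric form. A minimal sketch, assuming the published metric coefficients (temperature in degrees C, 10 m wind in km/h) rather than the unit-aware form used in the function body:

```python
# NWS/FCM wind chill formula in metric units; valid roughly for
# temperatures <= 10 C and wind speeds > 4.8 km/h (about 3 mph).
def windchill(temp_c, speed_kmh):
    v16 = speed_kmh ** 0.16
    return 13.12 + 0.6215 * temp_c - 11.37 * v16 + 0.3965 * temp_c * v16

# -10 C with a 30 km/h wind feels close to -20 C.
wct = windchill(-10.0, 30.0)
```

Outside the validity range (warm temperatures, near-calm winds) the regression is meaningless, which is exactly what the `mask_undefined` machinery above guards against.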
@exporter.export<EOL>@preprocess_xarray<EOL>@check_units('<STR_LIT>')<EOL>def heat_index(temperature, rh, mask_undefined=True):
delta = temperature.to(units.degF) - <NUM_LIT:0.> * units.degF<EOL>rh2 = rh * rh<EOL>delta2 = delta * delta<EOL>hi = (-<NUM_LIT> * units.degF<EOL>+ <NUM_LIT> * delta<EOL>+ <NUM_LIT> * units.delta_degF * rh<EOL>- <NUM_LIT> * delta * rh<EOL>- <NUM_LIT> / units.delta_degF * delta2<EOL>- <NUM_LIT> * units.delta_degF * rh2<EOL>+ <NUM_LIT> / units.delta_degF * delta2 * rh<EOL>+ <NUM_LIT> * delta * rh2<EOL>- <NUM_LIT> / units.delta_degF * delta2 * rh2)<EOL>if mask_undefined:<EOL><INDENT>mask = np.array((temperature < <NUM_LIT> * units.degF) | (rh < <NUM_LIT> * units.percent))<EOL>if mask.any():<EOL><INDENT>hi = masked_array(hi, mask=mask)<EOL><DEDENT><DEDENT>return hi<EOL>
r"""Calculate the Heat Index from the current temperature and relative humidity. The implementation uses the formula outlined in [Rothfusz1990]_. This equation is a multi-variable least-squares regression of the values obtained in [Steadman1979]_. Parameters ---------- temperature : `pint.Quantity` Air temperature rh : array_like The relative humidity expressed as a unitless ratio in the range [0, 1]. Can also pass a percentage if proper units are attached. Returns ------- `pint.Quantity` The corresponding Heat Index value(s) Other Parameters ---------------- mask_undefined : bool, optional A flag indicating whether a masked array should be returned with values where heat index is undefined masked. These are values where the temperature < 80F or relative humidity < 40 percent. Defaults to `True`. See Also -------- windchill
f8485:m7
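The Rothfusz regression the docstring cites is usually quoted with explicit coefficients in Fahrenheit/percent. A minimal sketch using those published coefficients (the unit-aware body above expresses the same polynomial through pint):

```python
# Rothfusz (1990) heat index regression; temperature in deg F,
# relative humidity in percent. Intended for temp >= 80 F, rh >= 40 %.
def heat_index(temp_f, rh_pct):
    t, r = temp_f, rh_pct
    return (-42.379 + 2.04901523 * t + 10.14333127 * r
            - 0.22475541 * t * r - 6.83783e-3 * t * t
            - 5.481717e-2 * r * r + 1.22874e-3 * t * t * r
            + 8.5282e-4 * t * r * r - 1.99e-6 * t * t * r * r)

# 90 F at 60 % relative humidity feels close to 100 F.
hi = heat_index(90.0, 60.0)
```

As with wind chill, the polynomial diverges outside its fitted range, hence the masking of values below 80 F or 40 percent humidity.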
@exporter.export<EOL>@preprocess_xarray<EOL>@check_units(temperature='<STR_LIT>', speed='<STR_LIT>')<EOL>def apparent_temperature(temperature, rh, speed, face_level_winds=False):
is_not_scalar = isinstance(temperature.m, (list, tuple, np.ndarray))<EOL>temperature = atleast_1d(temperature)<EOL>rh = atleast_1d(rh)<EOL>speed = atleast_1d(speed)<EOL>wind_chill_temperature = windchill(temperature, speed, face_level_winds=face_level_winds,<EOL>mask_undefined=True).to(temperature.units)<EOL>heat_index_temperature = heat_index(temperature, rh,<EOL>mask_undefined=True).to(temperature.units)<EOL>app_temperature = np.ma.where(masked_array(wind_chill_temperature).mask,<EOL>heat_index_temperature,<EOL>wind_chill_temperature)<EOL>if is_not_scalar:<EOL><INDENT>app_temperature[app_temperature.mask] = temperature[app_temperature.mask]<EOL>return np.array(app_temperature) * temperature.units<EOL><DEDENT>else:<EOL><INDENT>if app_temperature.mask:<EOL><INDENT>app_temperature = temperature.m<EOL><DEDENT>return atleast_1d(app_temperature)[<NUM_LIT:0>] * temperature.units<EOL><DEDENT>
r"""Calculate the current apparent temperature. Calculates the current apparent temperature based on the wind chill or heat index as appropriate for the current conditions. Follows [NWS10201]_. Parameters ---------- temperature : `pint.Quantity` The air temperature rh : `pint.Quantity` The relative humidity expressed as a unitless ratio in the range [0, 1]. Can also pass a percentage if proper units are attached. speed : `pint.Quantity` The wind speed at 10m. If instead the winds are at face level, `face_level_winds` should be set to `True` and the 1.5 multiplicative correction will be applied automatically. face_level_winds : bool, optional A flag indicating whether the wind speeds were measured at facial level instead of 10m, thus requiring a correction. Defaults to `False`. Returns ------- `pint.Quantity` The corresponding apparent temperature value(s) See Also -------- heat_index, windchill
f8485:m8
@exporter.export<EOL>@preprocess_xarray<EOL>@check_units('<STR_LIT>')<EOL>def pressure_to_height_std(pressure):
t0 = <NUM_LIT> * units.kelvin<EOL>gamma = <NUM_LIT> * units('<STR_LIT>')<EOL>p0 = <NUM_LIT> * units.mbar<EOL>return (t0 / gamma) * (<NUM_LIT:1> - (pressure / p0).to('<STR_LIT>')**(<EOL>mpconsts.Rd * gamma / mpconsts.g))<EOL>
r"""Convert pressure data to heights using the U.S. standard atmosphere. The implementation uses the formula outlined in [Hobbs1977]_ pg.60-61. Parameters ---------- pressure : `pint.Quantity` Atmospheric pressure Returns ------- `pint.Quantity` The corresponding height value(s) Notes ----- .. math:: Z = \frac{T_0}{\Gamma}[1-\frac{p}{p_0}^\frac{R\Gamma}{g}]
f8485:m9
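With the standard-atmosphere constants plugged in, the conversion is a single expression. A sketch assuming the usual values (T0 = 288 K, lapse rate 6.5 K/km, p0 = 1013.25 hPa):

```python
# U.S. standard atmosphere constants: surface temperature (K),
# lapse rate (K/m), surface pressure (hPa), Rd (J/(kg K)), g (m/s^2).
T0, GAMMA, P0, RD, G = 288.0, 0.0065, 1013.25, 287.05, 9.80665

def pressure_to_height_std(pressure_hpa):
    """Height in meters for a pressure in hPa."""
    return (T0 / GAMMA) * (1.0 - (pressure_hpa / P0) ** (RD * GAMMA / G))

# 500 hPa sits near 5.5-5.6 km in the standard atmosphere.
z500 = pressure_to_height_std(500.0)
```

`height_to_pressure_std` below is the exact algebraic inverse of this expression, which is what makes the `add_height_to_pressure`/`add_pressure_to_height` round trips consistent.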
@exporter.export<EOL>@preprocess_xarray<EOL>@check_units('<STR_LIT>')<EOL>def height_to_geopotential(height):
<EOL>geopot = mpconsts.G * mpconsts.me * ((<NUM_LIT:1> / mpconsts.Re) - (<NUM_LIT:1> / (mpconsts.Re + height)))<EOL>return geopot<EOL>
r"""Compute geopotential for a given height. Parameters ---------- height : `pint.Quantity` Height above sea level (array_like) Returns ------- `pint.Quantity` The corresponding geopotential value(s) Examples -------- >>> from metpy.constants import g, G, me, Re >>> import metpy.calc >>> from metpy.units import units >>> height = np.linspace(0,10000, num = 11) * units.m >>> geopot = metpy.calc.height_to_geopotential(height) >>> geopot <Quantity([ 0. 9817.46806283 19631.85526579 29443.16305888 39251.39289118 49056.54621087 58858.62446525 68657.62910064 78453.56156253 88246.42329545 98036.21574306], 'meter ** 2 / second ** 2')> Notes ----- Derived from definition of geopotential in [Hobbs2006]_ pg.14 Eq.1.8.
f8485:m10
@exporter.export<EOL>@preprocess_xarray<EOL>def geopotential_to_height(geopot):
<EOL>height = (((<NUM_LIT:1> / mpconsts.Re) - (geopot / (mpconsts.G * mpconsts.me))) ** -<NUM_LIT:1>) - mpconsts.Re<EOL>return height<EOL>
r"""Compute height from a given geopotential. Parameters ---------- geopotential : `pint.Quantity` Geopotential (array_like) Returns ------- `pint.Quantity` The corresponding height value(s) Examples -------- >>> from metpy.constants import g, G, me, Re >>> import metpy.calc >>> from metpy.units import units >>> height = np.linspace(0,10000, num = 11) * units.m >>> geopot = metpy.calc.height_to_geopotential(height) >>> geopot <Quantity([ 0. 9817.46806283 19631.85526579 29443.16305888 39251.39289118 49056.54621087 58858.62446525 68657.62910064 78453.56156253 88246.42329545 98036.21574306], 'meter ** 2 / second ** 2')> >>> height = metpy.calc.geopotential_to_height(geopot) >>> height <Quantity([ 0. 1000. 2000. 3000. 4000. 5000. 6000. 7000. 8000. 9000. 10000.], 'meter')> Notes ----- Derived from definition of geopotential in [Hobbs2006]_ pg.14 Eq.1.8.
f8485:m11
@exporter.export<EOL>@preprocess_xarray<EOL>@check_units('<STR_LIT>')<EOL>def height_to_pressure_std(height):
t0 = <NUM_LIT> * units.kelvin<EOL>gamma = <NUM_LIT> * units('<STR_LIT>')<EOL>p0 = <NUM_LIT> * units.mbar<EOL>return p0 * (<NUM_LIT:1> - (gamma / t0) * height) ** (mpconsts.g / (mpconsts.Rd * gamma))<EOL>
r"""Convert height data to pressures using the U.S. standard atmosphere. The implementation inverts the formula outlined in [Hobbs1977]_ pg.60-61. Parameters ---------- height : `pint.Quantity` Atmospheric height Returns ------- `pint.Quantity` The corresponding pressure value(s) Notes ----- .. math:: p = p_0 e^{\frac{g}{R \Gamma} \text{ln}(1-\frac{Z \Gamma}{T_0})}
f8485:m12
@exporter.export<EOL>@preprocess_xarray<EOL>def coriolis_parameter(latitude):
latitude = _check_radians(latitude, max_radians=np.pi / <NUM_LIT:2>)<EOL>return (<NUM_LIT> * mpconsts.omega * np.sin(latitude)).to('<STR_LIT>')<EOL>
r"""Calculate the coriolis parameter at each point. The implementation uses the formula outlined in [Hobbs1977]_ pg.370-371. Parameters ---------- latitude : array_like Latitude at each point Returns ------- `pint.Quantity` The corresponding coriolis force at each point
f8485:m13
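The formula `f = 2 * omega * sin(latitude)` is simple enough to sketch directly, assuming Earth's angular velocity of about 7.292e-5 rad/s and latitude in degrees:

```python
import math

# Earth's angular velocity, rad/s (assumed constant).
OMEGA = 7.292e-5

def coriolis_parameter(latitude_deg):
    """f = 2 * omega * sin(latitude), in s^-1."""
    return 2.0 * OMEGA * math.sin(math.radians(latitude_deg))

# Mid-latitude value: roughly 1e-4 s^-1 at 45 N.
f45 = coriolis_parameter(45.0)
```

The sign follows the hemisphere: positive in the north, negative in the south, and zero at the equator.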
@exporter.export<EOL>@preprocess_xarray<EOL>@check_units('<STR_LIT>', '<STR_LIT>')<EOL>def add_height_to_pressure(pressure, height):
pressure_level_height = pressure_to_height_std(pressure)<EOL>return height_to_pressure_std(pressure_level_height + height)<EOL>
r"""Calculate the pressure at a certain height above another pressure level. This assumes a standard atmosphere. Parameters ---------- pressure : `pint.Quantity` Pressure level height : `pint.Quantity` Height above a pressure level Returns ------- `pint.Quantity` The corresponding pressure value for the height above the pressure level See Also -------- pressure_to_height_std, height_to_pressure_std, add_pressure_to_height
f8485:m14
@exporter.export<EOL>@preprocess_xarray<EOL>@check_units('<STR_LIT>', '<STR_LIT>')<EOL>def add_pressure_to_height(height, pressure):
pressure_at_height = height_to_pressure_std(height)<EOL>return pressure_to_height_std(pressure_at_height - pressure)<EOL>
r"""Calculate the height at a certain pressure above another height. This assumes a standard atmosphere. Parameters ---------- height : `pint.Quantity` Height level pressure : `pint.Quantity` Pressure above height level Returns ------- `pint.Quantity` The corresponding height value for the pressure above the height level See Also -------- pressure_to_height_std, height_to_pressure_std, add_height_to_pressure
f8485:m15
@exporter.export<EOL>@preprocess_xarray<EOL>@check_units('<STR_LIT>', '<STR_LIT>', '<STR_LIT>')<EOL>def sigma_to_pressure(sigma, psfc, ptop):
if np.any(sigma < <NUM_LIT:0>) or np.any(sigma > <NUM_LIT:1>):<EOL><INDENT>raise ValueError('<STR_LIT>')<EOL><DEDENT>if psfc.magnitude < <NUM_LIT:0> or ptop.magnitude < <NUM_LIT:0>:<EOL><INDENT>raise ValueError('<STR_LIT>')<EOL><DEDENT>return sigma * (psfc - ptop) + ptop<EOL>
r"""Calculate pressure from sigma values. Parameters ---------- sigma : ndarray The sigma levels to be converted to pressure levels. psfc : `pint.Quantity` The surface pressure value. ptop : `pint.Quantity` The pressure value at the top of the model domain. Returns ------- `pint.Quantity` The pressure values at the given sigma levels. Notes ----- Sigma definition adapted from [Philips1957]_. .. math:: p = \sigma * (p_{sfc} - p_{top}) + p_{top} * :math:`p` is pressure at a given `\sigma` level * :math:`\sigma` is non-dimensional, scaled pressure * :math:`p_{sfc}` is pressure at the surface or model floor * :math:`p_{top}` is pressure at the top of the model domain
f8485:m16
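The sigma-coordinate mapping is a linear interpolation between the model top and the surface. A scalar sketch with the same domain check as the function above:

```python
# p = sigma * (psfc - ptop) + ptop, for scalar inputs (any consistent
# pressure unit); sigma = 0 is the model top, sigma = 1 the surface.
def sigma_to_pressure(sigma, psfc, ptop):
    if not 0.0 <= sigma <= 1.0:
        raise ValueError('sigma must be within the range [0, 1]')
    return sigma * (psfc - ptop) + ptop

# Halfway through a domain spanning 1000 hPa to 100 hPa.
p_mid = sigma_to_pressure(0.5, 1000.0, 100.0)  # hPa
```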
@exporter.export<EOL>@preprocess_xarray<EOL>def smooth_gaussian(scalar_grid, n):
<EOL>n = int(round(n))<EOL>if n < <NUM_LIT:2>:<EOL><INDENT>n = <NUM_LIT:2><EOL><DEDENT>sgma = n / (<NUM_LIT:2> * np.pi)<EOL>nax = len(scalar_grid.shape)<EOL>sgma_seq = [sgma if i > nax - <NUM_LIT:3> else <NUM_LIT:0> for i in range(nax)]<EOL>res = gaussian_filter(scalar_grid, sgma_seq, truncate=<NUM_LIT:2> * np.sqrt(<NUM_LIT:2>))<EOL>if hasattr(scalar_grid, '<STR_LIT>'):<EOL><INDENT>res = res * scalar_grid.units<EOL><DEDENT>return res<EOL>
Filter with normal distribution of weights. Parameters ---------- scalar_grid : `pint.Quantity` Some n-dimensional scalar grid. If more than two axes, smoothing is only done across the last two. n : int Degree of filtering Returns ------- `pint.Quantity` The filtered 2D scalar grid Notes ----- This function is a close replication of the GEMPAK function GWFS, but is not identical. The following notes are incorporated from the GEMPAK source code: This function smoothes a scalar grid using a moving average low-pass filter whose weights are determined by the normal (Gaussian) probability distribution function for two dimensions. The weight given to any grid point within the area covered by the moving average for a target grid point is proportional to EXP [ -( D ** 2 ) ], where D is the distance from that point to the target point divided by the standard deviation of the normal distribution. The value of the standard deviation is determined by the degree of filtering requested. The degree of filtering is specified by an integer. This integer is the number of grid increments from crest to crest of the wave for which the theoretical response is 1/e = .3679. If the grid increment is called delta_x, and the value of this integer is represented by N, then the theoretical filter response function value for the N * delta_x wave will be 1/e. The actual response function will be greater than the theoretical value. The larger N is, the more severe the filtering will be, because the response function for all wavelengths shorter than N * delta_x will be less than 1/e. Furthermore, as N is increased, the slope of the filter response function becomes more shallow; so, the response at all wavelengths decreases, but the amount of decrease lessens with increasing wavelength. (The theoretical response function can be obtained easily--it is the Fourier transform of the weight function described above.) The area of the patch covered by the moving average varies with N. 
As N gets bigger, the smoothing gets stronger, and weight values farther from the target grid point are larger because the standard deviation of the normal distribution is bigger. Thus, increasing N has the effect of expanding the moving average window as well as changing the values of weights. The patch is a square covering all points whose weight values are within two standard deviations of the mean of the two dimensional normal distribution. The key difference between GEMPAK's GWFS and this function is that, in GEMPAK, the leftover weight values representing the fringe of the distribution are applied to the target grid point. In this function, the leftover weights are not used. When this function is invoked, the first argument is the grid to be smoothed, the second is the value of N as described above: GWFS ( S, N ) where N > 1. If N <= 1, N = 2 is assumed. For example, if N = 4, then the 4 delta x wave length is passed with approximate response 1/e.
f8485:m17
@exporter.export<EOL>@preprocess_xarray<EOL>def smooth_n_point(scalar_grid, n=<NUM_LIT:5>, passes=<NUM_LIT:1>):
if n == <NUM_LIT:9>:<EOL><INDENT>p = <NUM_LIT><EOL>q = <NUM_LIT><EOL>r = <NUM_LIT><EOL><DEDENT>elif n == <NUM_LIT:5>:<EOL><INDENT>p = <NUM_LIT:0.5><EOL>q = <NUM_LIT><EOL>r = <NUM_LIT:0.0><EOL><DEDENT>else:<EOL><INDENT>raise ValueError('<STR_LIT>'<EOL>'<STR_LIT>')<EOL><DEDENT>smooth_grid = scalar_grid[:].copy()<EOL>for _i in range(passes):<EOL><INDENT>smooth_grid[<NUM_LIT:1>:-<NUM_LIT:1>, <NUM_LIT:1>:-<NUM_LIT:1>] = (p * smooth_grid[<NUM_LIT:1>:-<NUM_LIT:1>, <NUM_LIT:1>:-<NUM_LIT:1>]<EOL>+ q * (smooth_grid[<NUM_LIT:2>:, <NUM_LIT:1>:-<NUM_LIT:1>] + smooth_grid[<NUM_LIT:1>:-<NUM_LIT:1>, <NUM_LIT:2>:]<EOL>+ smooth_grid[:-<NUM_LIT:2>, <NUM_LIT:1>:-<NUM_LIT:1>] + smooth_grid[<NUM_LIT:1>:-<NUM_LIT:1>, :-<NUM_LIT:2>])<EOL>+ r * (smooth_grid[<NUM_LIT:2>:, <NUM_LIT:2>:] + smooth_grid[<NUM_LIT:2>:, :-<NUM_LIT:2>]<EOL>+ smooth_grid[:-<NUM_LIT:2>, <NUM_LIT:2>:] + smooth_grid[:-<NUM_LIT:2>, :-<NUM_LIT:2>]))<EOL><DEDENT>return smooth_grid<EOL>
Filter with an n-point smoother. Parameters ---------- scalar_grid : array-like or `pint.Quantity` Some 2D scalar grid to be smoothed. n : int The number of points to use in smoothing, only valid inputs are 5 and 9. Defaults to 5. passes : int The number of times to apply the filter to the grid. Defaults to 1. Returns ------- array-like or `pint.Quantity` The filtered 2D scalar grid. Notes ----- This function is a close replication of the GEMPAK functions SM5S and SM9S, depending on the choice of the number of points to use for smoothing. This function can be applied multiple times to create a more smoothed field and will only smooth the interior points, leaving the end points with their original values. If a masked value or NaN values exists in the array, it will propagate to any point that uses that particular grid point in the smoothing calculation. Applying the smoothing function multiple times will propagate NaNs further throughout the domain.
f8485:m18
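The interior-point update above can be sketched in plain Python. This is a minimal 5-point version for illustration only, not MetPy's vectorized implementation; the weights p = 0.5 and q = 0.125 (GEMPAK's SM5S scheme) are assumed, since the numeric literals are masked in this dump, and a per-pass copy stands in for the in-place array slicing:

```python
def smooth5(grid, passes=1):
    """5-point smoother over the interior of a 2D list of floats.

    Each interior point becomes p * itself plus q times the sum of its
    four edge neighbors (p=0.5, q=0.125, so the weights sum to 1).
    Boundary points keep their original values.
    """
    p, q = 0.5, 0.125
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for _ in range(passes):
        prev = [row[:] for row in out]  # smooth against the previous pass
        for i in range(1, rows - 1):
            for j in range(1, cols - 1):
                out[i][j] = (p * prev[i][j]
                             + q * (prev[i - 1][j] + prev[i + 1][j]
                                    + prev[i][j - 1] + prev[i][j + 1]))
    return out

field = [[0.0, 0.0, 0.0],
         [0.0, 8.0, 0.0],
         [0.0, 0.0, 0.0]]
smoothed = smooth5(field)
# Center: 0.5 * 8.0 + 0.125 * (0 + 0 + 0 + 0) = 4.0; boundary unchanged.
```

Each additional pass halves the isolated spike again, which is why repeated application steadily smooths the field while NaNs would spread one ring per pass.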
def _check_radians(value, max_radians=<NUM_LIT:2> * np.pi):
try:<EOL><INDENT>value = value.to('<STR_LIT>').m<EOL><DEDENT>except AttributeError:<EOL><INDENT>pass<EOL><DEDENT>if np.greater(np.nanmax(np.abs(value)), max_radians):<EOL><INDENT>warnings.warn('<STR_LIT>'<EOL>'<STR_LIT>'.format(max_radians))<EOL><DEDENT>return value<EOL>
Input validation of values that could be in degrees instead of radians. Parameters ---------- value : `pint.Quantity` The input value to check. max_radians : float Maximum absolute value of radians before warning. Returns ------- `pint.Quantity` The input value
f8485:m19
def distances_from_cross_section(cross):
if (CFConventionHandler.check_axis(cross.metpy.x, '<STR_LIT>')<EOL>and CFConventionHandler.check_axis(cross.metpy.y, '<STR_LIT>')):<EOL><INDENT>from pyproj import Geod<EOL>g = Geod(cross.metpy.cartopy_crs.proj4_init)<EOL>lon = cross.metpy.x<EOL>lat = cross.metpy.y<EOL>forward_az, _, distance = g.inv(lon[<NUM_LIT:0>].values * np.ones_like(lon),<EOL>lat[<NUM_LIT:0>].values * np.ones_like(lat),<EOL>lon.values,<EOL>lat.values)<EOL>x = distance * np.sin(np.deg2rad(forward_az))<EOL>y = distance * np.cos(np.deg2rad(forward_az))<EOL>x = xr.DataArray(x, coords=lon.coords, dims=lon.dims, attrs={'<STR_LIT>': '<STR_LIT>'})<EOL>y = xr.DataArray(y, coords=lat.coords, dims=lat.dims, attrs={'<STR_LIT>': '<STR_LIT>'})<EOL><DEDENT>elif (CFConventionHandler.check_axis(cross.metpy.x, '<STR_LIT:x>')<EOL>and CFConventionHandler.check_axis(cross.metpy.y, '<STR_LIT:y>')):<EOL><INDENT>x = cross.metpy.x<EOL>y = cross.metpy.y<EOL><DEDENT>else:<EOL><INDENT>raise AttributeError('<STR_LIT>')<EOL><DEDENT>return x, y<EOL>
Calculate the distances in the x and y directions along a cross-section. Parameters ---------- cross : `xarray.DataArray` The input DataArray of a cross-section from which to obtain geometric distances in the x and y directions. Returns ------- x, y : tuple of `xarray.DataArray` A tuple of the x and y distances as DataArrays
f8486:m0
def latitude_from_cross_section(cross):
y = cross.metpy.y<EOL>if CFConventionHandler.check_axis(y, '<STR_LIT>'):<EOL><INDENT>return y<EOL><DEDENT>else:<EOL><INDENT>import cartopy.crs as ccrs<EOL>latitude = ccrs.Geodetic().transform_points(cross.metpy.cartopy_crs,<EOL>cross.metpy.x.values,<EOL>y.values)[..., <NUM_LIT:1>]<EOL>latitude = xr.DataArray(latitude, coords=y.coords, dims=y.dims,<EOL>attrs={'<STR_LIT>': '<STR_LIT>'})<EOL>return latitude<EOL><DEDENT>
Calculate the latitude of points in a cross-section. Parameters ---------- cross : `xarray.DataArray` The input DataArray of a cross-section from which to obtain latitudes. Returns ------- latitude : `xarray.DataArray` Latitude of points
f8486:m1
@exporter.export<EOL>def unit_vectors_from_cross_section(cross, index='<STR_LIT:index>'):
x, y = distances_from_cross_section(cross)<EOL>dx_di = first_derivative(x, axis=index).values<EOL>dy_di = first_derivative(y, axis=index).values<EOL>tangent_vector_mag = np.hypot(dx_di, dy_di)<EOL>unit_tangent_vector = np.vstack([dx_di / tangent_vector_mag, dy_di / tangent_vector_mag])<EOL>unit_normal_vector = np.vstack([-dy_di / tangent_vector_mag, dx_di / tangent_vector_mag])<EOL>return unit_tangent_vector, unit_normal_vector<EOL>
r"""Calculate the unit tangent and unit normal vectors from a cross-section. Given a path described parametrically by :math:`\vec{l}(i) = (x(i), y(i))`, we can find the unit tangent vector by the formula .. math:: \vec{T}(i) = \frac{1}{\sqrt{\left( \frac{dx}{di} \right)^2 + \left( \frac{dy}{di} \right)^2}} \left( \frac{dx}{di}, \frac{dy}{di} \right) From this, because this is a two-dimensional path, the normal vector can be obtained by a simple :math:`\frac{\pi}{2}` rotation. Parameters ---------- cross : `xarray.DataArray` The input DataArray of a cross-section from which to obtain unit vectors. index : `str`, optional A string denoting the index coordinate of the cross section, defaults to 'index' as set by `metpy.interpolate.cross_section`. Returns ------- unit_tangent_vector, unit_normal_vector : tuple of `numpy.ndarray` Arrays describing the unit tangent and unit normal vectors (in x,y) for all points along the cross section.
f8486:m2
@exporter.export<EOL>@check_matching_coordinates<EOL>def cross_section_components(data_x, data_y, index='<STR_LIT:index>'):
<EOL>unit_tang, unit_norm = unit_vectors_from_cross_section(data_x, index=index)<EOL>component_tang = data_x * unit_tang[<NUM_LIT:0>] + data_y * unit_tang[<NUM_LIT:1>]<EOL>component_norm = data_x * unit_norm[<NUM_LIT:0>] + data_y * unit_norm[<NUM_LIT:1>]<EOL>component_tang.attrs = {'<STR_LIT>': data_x.attrs['<STR_LIT>']}<EOL>component_norm.attrs = {'<STR_LIT>': data_x.attrs['<STR_LIT>']}<EOL>return component_tang, component_norm<EOL>
r"""Obtain the tangential and normal components of a cross-section of a vector field. Parameters ---------- data_x : `xarray.DataArray` The input DataArray of the x-component (in terms of data projection) of the vector field. data_y : `xarray.DataArray` The input DataArray of the y-component (in terms of data projection) of the vector field. Returns ------- component_tangential, component_normal: tuple of `xarray.DataArray` The components of the vector field in the tangential and normal directions, respectively. See Also -------- tangential_component, normal_component Notes ----- The coordinates of `data_x` and `data_y` must match.
f8486:m3
@exporter.export<EOL>@check_matching_coordinates<EOL>def normal_component(data_x, data_y, index='<STR_LIT:index>'):
<EOL>_, unit_norm = unit_vectors_from_cross_section(data_x, index=index)<EOL>component_norm = data_x * unit_norm[<NUM_LIT:0>] + data_y * unit_norm[<NUM_LIT:1>]<EOL>for attr in ('<STR_LIT>', '<STR_LIT>'):<EOL><INDENT>if attr in data_x.attrs:<EOL><INDENT>component_norm.attrs[attr] = data_x.attrs[attr]<EOL><DEDENT><DEDENT>return component_norm<EOL>
r"""Obtain the normal component of a cross-section of a vector field. Parameters ---------- data_x : `xarray.DataArray` The input DataArray of the x-component (in terms of data projection) of the vector field. data_y : `xarray.DataArray` The input DataArray of the y-component (in terms of data projection) of the vector field. Returns ------- component_normal: `xarray.DataArray` The component of the vector field in the normal direction. See Also -------- cross_section_components, tangential_component Notes ----- The coordinates of `data_x` and `data_y` must match.
f8486:m4
@exporter.export<EOL>@check_matching_coordinates<EOL>def tangential_component(data_x, data_y, index='<STR_LIT:index>'):
<EOL>unit_tang, _ = unit_vectors_from_cross_section(data_x, index=index)<EOL>component_tang = data_x * unit_tang[<NUM_LIT:0>] + data_y * unit_tang[<NUM_LIT:1>]<EOL>for attr in ('<STR_LIT>', '<STR_LIT>'):<EOL><INDENT>if attr in data_x.attrs:<EOL><INDENT>component_tang.attrs[attr] = data_x.attrs[attr]<EOL><DEDENT><DEDENT>return component_tang<EOL>
r"""Obtain the tangential component of a cross-section of a vector field. Parameters ---------- data_x : `xarray.DataArray` The input DataArray of the x-component (in terms of data projection) of the vector field. data_y : `xarray.DataArray` The input DataArray of the y-component (in terms of data projection) of the vector field. Returns ------- component_tangential: `xarray.DataArray` The component of the vector field in the tangential direction. See Also -------- cross_section_components, normal_component Notes ----- The coordinates of `data_x` and `data_y` must match.
f8486:m5
@exporter.export<EOL>@check_matching_coordinates<EOL>def absolute_momentum(u_wind, v_wind, index='<STR_LIT:index>'):
<EOL>norm_wind = normal_component(u_wind, v_wind, index=index)<EOL>norm_wind.metpy.convert_units('<STR_LIT>')<EOL>latitude = latitude_from_cross_section(norm_wind) <EOL>_, latitude = xr.broadcast(norm_wind, latitude)<EOL>f = coriolis_parameter(np.deg2rad(latitude.values)).magnitude <EOL>x, y = distances_from_cross_section(norm_wind)<EOL>x.metpy.convert_units('<STR_LIT>')<EOL>y.metpy.convert_units('<STR_LIT>')<EOL>_, x, y = xr.broadcast(norm_wind, x, y)<EOL>distance = np.hypot(x, y).values <EOL>m = norm_wind + f * distance<EOL>m.attrs = {'<STR_LIT>': norm_wind.attrs['<STR_LIT>']}<EOL>return m<EOL>
r"""Calculate cross-sectional absolute momentum (also called pseudoangular momentum). As given in [Schultz1999]_, absolute momentum (also called pseudoangular momentum) is given by .. math:: M = v + fx where :math:`v` is the along-front component of the wind and :math:`x` is the cross-front distance. Applied to a cross-section taken perpendicular to the front, :math:`v` becomes the normal component of the wind and :math:`x` the tangential distance. If using this calculation in assessing symmetric instability, geostrophic wind should be used so that geostrophic absolute momentum :math:`\left(M_g\right)` is obtained, as described in [Schultz1999]_. Parameters ---------- u_wind : `xarray.DataArray` The input DataArray of the x-component (in terms of data projection) of the wind. v_wind : `xarray.DataArray` The input DataArray of the y-component (in terms of data projection) of the wind. Returns ------- absolute_momentum: `xarray.DataArray` The absolute momentum Notes ----- The coordinates of `u_wind` and `v_wind` must match.
f8486:m6
@exporter.export<EOL>@preprocess_xarray<EOL>def resample_nn_1d(a, centers):
ix = []<EOL>for center in centers:<EOL><INDENT>index = (np.abs(a - center)).argmin()<EOL>if index not in ix:<EOL><INDENT>ix.append(index)<EOL><DEDENT><DEDENT>return ix<EOL>
Return one-dimensional nearest-neighbor indexes based on user-specified centers. Parameters ---------- a : array-like 1-dimensional array of numeric values from which to extract indexes of nearest-neighbors centers : array-like 1-dimensional array of numeric values representing a subset of values to approximate Returns ------- An array of indexes representing values closest to given array values
f8488:m0
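The nearest-neighbor index selection above can be illustrated in pure Python. The `levels` array below is a hypothetical example; MetPy's version uses `np.abs(a - center).argmin()` where this sketch uses a `min` over indices:

```python
def resample_nn_1d(a, centers):
    """Return indexes of the elements of `a` nearest each requested
    center, dropping duplicates while preserving first-seen order."""
    ix = []
    for center in centers:
        # index of the element with the smallest absolute difference
        index = min(range(len(a)), key=lambda i: abs(a[i] - center))
        if index not in ix:
            ix.append(index)
    return ix

levels = [1000.0, 925.0, 850.0, 700.0, 500.0]
# 860.0 and 849.0 both snap to 850.0 (index 2); the duplicate is dropped.
resample_nn_1d(levels, [1000.0, 860.0, 849.0])  # -> [0, 2]
```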
@exporter.export<EOL>@preprocess_xarray<EOL>def nearest_intersection_idx(a, b):
<EOL>difference = a - b<EOL>sign_change_idx, = np.nonzero(np.diff(np.sign(difference)))<EOL>return sign_change_idx<EOL>
Determine the index of the point just before the intersection of two lines with common x values. Parameters ---------- a : array-like 1-dimensional array of y-values for line 1 b : array-like 1-dimensional array of y-values for line 2 Returns ------- An array of indexes representing the index of the values just before the intersection(s) of the two lines.
f8488:m1
@exporter.export<EOL>@preprocess_xarray<EOL>@units.wraps(('<STR_LIT>', '<STR_LIT>'), ('<STR_LIT>', '<STR_LIT>', '<STR_LIT>'))<EOL>def find_intersections(x, a, b, direction='<STR_LIT:all>'):
<EOL>nearest_idx = nearest_intersection_idx(a, b)<EOL>next_idx = nearest_idx + <NUM_LIT:1><EOL>sign_change = np.sign(a[next_idx] - b[next_idx])<EOL>_, x0 = _next_non_masked_element(x, nearest_idx)<EOL>_, x1 = _next_non_masked_element(x, next_idx)<EOL>_, a0 = _next_non_masked_element(a, nearest_idx)<EOL>_, a1 = _next_non_masked_element(a, next_idx)<EOL>_, b0 = _next_non_masked_element(b, nearest_idx)<EOL>_, b1 = _next_non_masked_element(b, next_idx)<EOL>delta_y0 = a0 - b0<EOL>delta_y1 = a1 - b1<EOL>intersect_x = (delta_y1 * x0 - delta_y0 * x1) / (delta_y1 - delta_y0)<EOL>intersect_y = ((intersect_x - x0) / (x1 - x0)) * (a1 - a0) + a0<EOL>if len(intersect_x) == <NUM_LIT:0>:<EOL><INDENT>return intersect_x, intersect_y<EOL><DEDENT>duplicate_mask = (np.ediff1d(intersect_x, to_end=<NUM_LIT:1>) != <NUM_LIT:0>)<EOL>if direction == '<STR_LIT>':<EOL><INDENT>mask = sign_change > <NUM_LIT:0><EOL><DEDENT>elif direction == '<STR_LIT>':<EOL><INDENT>mask = sign_change < <NUM_LIT:0><EOL><DEDENT>elif direction == '<STR_LIT:all>':<EOL><INDENT>return intersect_x[duplicate_mask], intersect_y[duplicate_mask]<EOL><DEDENT>else:<EOL><INDENT>raise ValueError('<STR_LIT>'.format(str(direction)))<EOL><DEDENT>return intersect_x[mask & duplicate_mask], intersect_y[mask & duplicate_mask]<EOL>
Calculate the best estimate of intersection. Calculates the best estimates of the intersection of two y-value data sets that share a common x-value set. Parameters ---------- x : array-like 1-dimensional array of numeric x-values a : array-like 1-dimensional array of y-values for line 1 b : array-like 1-dimensional array of y-values for line 2 direction : string, optional specifies direction of crossing. 'all', 'increasing' (a becoming greater than b), or 'decreasing' (b becoming greater than a). Defaults to 'all'. Returns ------- A tuple (x, y) of array-like with the x and y coordinates of the intersections of the lines.
f8488:m2
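The linear-crossing estimate used above (solve for where the signed difference a − b changes sign between two adjacent shared x values) can be written out for a single segment. This is a sketch of the formula only; the masking, duplicate filtering, and direction handling of the full function are omitted:

```python
def segment_intersection(x0, x1, a0, a1, b0, b1):
    """Estimate where line a crosses line b between two shared x values.

    Treats both lines as linear on [x0, x1], solves for the x where the
    signed gap (a - b) passes through zero, then interpolates y along a.
    """
    d0 = a0 - b0  # signed gap at the left point
    d1 = a1 - b1  # signed gap at the right point
    xi = (d1 * x0 - d0 * x1) / (d1 - d0)
    yi = (xi - x0) / (x1 - x0) * (a1 - a0) + a0
    return xi, yi

# a rises 0 -> 2 while b stays at 1 over x in [0, 1]: cross at (0.5, 1.0)
segment_intersection(0.0, 1.0, 0.0, 2.0, 1.0, 1.0)
```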
@exporter.export<EOL>@preprocess_xarray<EOL>@deprecated('<STR_LIT>', addendum=('<STR_LIT>'<EOL>'<STR_LIT>'), pending=False)<EOL>def interpolate_nans(x, y, kind='<STR_LIT>'):
return interpolate_nans_1d(x, y, kind=kind)<EOL>
Wrap interpolate_nans_1d for deprecated interpolate_nans.
f8488:m3
def _next_non_masked_element(a, idx):
try:<EOL><INDENT>next_idx = idx + a[idx:].mask.argmin()<EOL>if ma.is_masked(a[next_idx]):<EOL><INDENT>return None, None<EOL><DEDENT>else:<EOL><INDENT>return next_idx, a[next_idx]<EOL><DEDENT><DEDENT>except (AttributeError, TypeError, IndexError):<EOL><INDENT>return idx, a[idx]<EOL><DEDENT>
Return the next non-masked element of a masked array. If an array is masked, return the next non-masked element (if the given index is masked). If no other unmasked points are after the given masked point, returns None. Parameters ---------- a : array-like 1-dimensional array of numeric values idx : integer index of requested element Returns ------- Index of next non-masked element and next non-masked element
f8488:m4
def _delete_masked_points(*arrs):
if any(hasattr(a, '<STR_LIT>') for a in arrs):<EOL><INDENT>keep = ~functools.reduce(np.logical_or, (np.ma.getmaskarray(a) for a in arrs))<EOL>return tuple(ma.asarray(a[keep]) for a in arrs)<EOL><DEDENT>else:<EOL><INDENT>return arrs<EOL><DEDENT>
Delete masked points from arrays. Takes arrays and removes masked points to help with calculations and plotting. Parameters ---------- arrs : one or more array-like source arrays Returns ------- arrs : one or more array-like arrays with masked elements removed
f8488:m5
@exporter.export<EOL>@preprocess_xarray<EOL>def reduce_point_density(points, radius, priority=None):
<EOL>if points.ndim < <NUM_LIT:2>:<EOL><INDENT>points = points.reshape(-<NUM_LIT:1>, <NUM_LIT:1>)<EOL><DEDENT>tree = cKDTree(points)<EOL>if priority is not None:<EOL><INDENT>sorted_indices = np.argsort(priority)[::-<NUM_LIT:1>]<EOL><DEDENT>else:<EOL><INDENT>sorted_indices = range(len(points))<EOL><DEDENT>keep = np.ones(len(points), dtype=bool)<EOL>for ind in sorted_indices:<EOL><INDENT>if keep[ind]:<EOL><INDENT>neighbors = tree.query_ball_point(points[ind], radius)<EOL>keep[neighbors] = False<EOL>keep[ind] = True<EOL><DEDENT><DEDENT>return keep<EOL>
r"""Return a mask to reduce the density of points in irregularly-spaced data. This function is used to down-sample a collection of scattered points (e.g. surface data), returning a mask that can be used to select the points from one or more arrays (e.g. arrays of temperature and dew point). The points selected can be controlled by providing an array of ``priority`` values (e.g. rainfall totals to ensure that stations with higher precipitation remain in the mask). Parameters ---------- points : (N, K) array-like N locations of the points in K dimensional space radius : float minimum radius allowed between points priority : (N, K) array-like, optional If given, this should have the same shape as ``points``; these values will be used to control selection priority for points. Returns ------- (N,) array-like of boolean values indicating whether points should be kept. This can be used directly to index numpy arrays to return only the desired points. Examples -------- >>> metpy.calc.reduce_point_density(np.array([1, 2, 3]), 1.) array([ True, False, True]) >>> metpy.calc.reduce_point_density(np.array([1, 2, 3]), 1., ... priority=np.array([0.1, 0.9, 0.3])) array([False, True, False])
f8488:m6
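The greedy keep/discard loop can be sketched in one dimension without the cKDTree: an absolute-distance test stands in for the ball query, while the priority ordering and the re-set of `keep[i]` after discarding neighbors mirror the function above.

```python
def reduce_point_density_1d(points, radius, priority=None):
    """Greedy thinning of 1D points: visit points (highest priority
    first, if given), keep the visited point, and discard every other
    point within `radius` of it."""
    n = len(points)
    if priority is not None:
        order = sorted(range(n), key=lambda i: priority[i], reverse=True)
    else:
        order = range(n)
    keep = [True] * n
    for i in order:
        if keep[i]:
            for j in range(n):
                if abs(points[j] - points[i]) <= radius:
                    keep[j] = False
            keep[i] = True  # the visited point itself survives
    return keep

reduce_point_density_1d([1.0, 2.0, 3.0], 1.0)
# -> [True, False, True], matching the docstring example
```

With `priority=[0.1, 0.9, 0.3]` the middle point is visited first and eliminates both neighbors, reproducing the second docstring example.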
def _get_bound_pressure_height(pressure, bound, heights=None, interpolate=True):
<EOL>sort_inds = np.argsort(pressure)[::-<NUM_LIT:1>]<EOL>pressure = pressure[sort_inds]<EOL>if heights is not None:<EOL><INDENT>heights = heights[sort_inds]<EOL><DEDENT>if bound.dimensionality == {'<STR_LIT>': -<NUM_LIT:1.0>, '<STR_LIT>': <NUM_LIT:1.0>, '<STR_LIT>': -<NUM_LIT>}:<EOL><INDENT>if bound in pressure:<EOL><INDENT>bound_pressure = bound<EOL>if heights is not None:<EOL><INDENT>bound_height = heights[pressure == bound_pressure]<EOL><DEDENT>else:<EOL><INDENT>bound_height = pressure_to_height_std(bound_pressure)<EOL><DEDENT><DEDENT>else:<EOL><INDENT>if interpolate:<EOL><INDENT>bound_pressure = bound <EOL>if heights is not None: <EOL><INDENT>bound_height = log_interpolate_1d(bound_pressure, pressure, heights)<EOL><DEDENT>else: <EOL><INDENT>bound_height = pressure_to_height_std(bound_pressure)<EOL><DEDENT><DEDENT>else: <EOL><INDENT>idx = (np.abs(pressure - bound)).argmin()<EOL>bound_pressure = pressure[idx]<EOL>if heights is not None:<EOL><INDENT>bound_height = heights[idx]<EOL><DEDENT>else:<EOL><INDENT>bound_height = pressure_to_height_std(bound_pressure)<EOL><DEDENT><DEDENT><DEDENT><DEDENT>elif bound.dimensionality == {'<STR_LIT>': <NUM_LIT:1.0>}:<EOL><INDENT>if heights is not None:<EOL><INDENT>if bound in heights: <EOL><INDENT>bound_height = bound<EOL>bound_pressure = pressure[heights == bound]<EOL><DEDENT>else: <EOL><INDENT>if interpolate:<EOL><INDENT>bound_height = bound<EOL>bound_pressure = np.interp(np.atleast_1d(bound), heights,<EOL>pressure).astype(bound.dtype) * pressure.units<EOL><DEDENT>else:<EOL><INDENT>idx = (np.abs(heights - bound)).argmin()<EOL>bound_pressure = pressure[idx]<EOL>bound_height = heights[idx]<EOL><DEDENT><DEDENT><DEDENT>else: <EOL><INDENT>bound_height = bound<EOL>bound_pressure = height_to_pressure_std(bound)<EOL>if not interpolate:<EOL><INDENT>idx = (np.abs(pressure - bound_pressure)).argmin()<EOL>bound_pressure = pressure[idx]<EOL>bound_height = 
pressure_to_height_std(bound_pressure)<EOL><DEDENT><DEDENT><DEDENT>else:<EOL><INDENT>raise ValueError('<STR_LIT>')<EOL><DEDENT>if not (_greater_or_close(bound_pressure, np.nanmin(pressure) * pressure.units)<EOL>and _less_or_close(bound_pressure, np.nanmax(pressure) * pressure.units)):<EOL><INDENT>raise ValueError('<STR_LIT>')<EOL><DEDENT>if heights is not None:<EOL><INDENT>if not (_less_or_close(bound_height, np.nanmax(heights) * heights.units)<EOL>and _greater_or_close(bound_height, np.nanmin(heights) * heights.units)):<EOL><INDENT>raise ValueError('<STR_LIT>')<EOL><DEDENT><DEDENT>return bound_pressure, bound_height<EOL>
Calculate the bounding pressure and height in a layer. Given pressure, optional heights, and a bound, return either the closest pressure/height or interpolated pressure/height. If no heights are provided, a standard atmosphere is assumed. Parameters ---------- pressure : `pint.Quantity` Atmospheric pressures bound : `pint.Quantity` Bound to retrieve (in pressure or height) heights : `pint.Quantity`, optional Atmospheric heights associated with the pressure levels. Defaults to using heights calculated from ``pressure`` assuming a standard atmosphere. interpolate : boolean, optional Interpolate the bound or return the nearest. Defaults to True. Returns ------- `pint.Quantity` The bound pressure and height.
f8488:m7
@exporter.export<EOL>@preprocess_xarray<EOL>@check_units('<STR_LIT>')<EOL>def get_layer_heights(heights, depth, *args, **kwargs):
bottom = kwargs.pop('<STR_LIT>', None)<EOL>interpolate = kwargs.pop('<STR_LIT>', True)<EOL>with_agl = kwargs.pop('<STR_LIT>', False)<EOL>for datavar in args:<EOL><INDENT>if len(heights) != len(datavar):<EOL><INDENT>raise ValueError('<STR_LIT>')<EOL><DEDENT><DEDENT>if with_agl:<EOL><INDENT>sfc_height = np.min(heights)<EOL>heights = heights - sfc_height<EOL><DEDENT>if bottom is None:<EOL><INDENT>bottom = heights[<NUM_LIT:0>]<EOL><DEDENT>heights = heights.to_base_units()<EOL>bottom = bottom.to_base_units()<EOL>top = bottom + depth<EOL>ret = [] <EOL>sort_inds = np.argsort(heights)<EOL>heights = heights[sort_inds]<EOL>inds = _greater_or_close(heights, bottom) & _less_or_close(heights, top)<EOL>heights_interp = heights[inds]<EOL>if interpolate:<EOL><INDENT>if top not in heights_interp:<EOL><INDENT>heights_interp = np.sort(np.append(heights_interp, top)) * heights.units<EOL><DEDENT>if bottom not in heights_interp:<EOL><INDENT>heights_interp = np.sort(np.append(heights_interp, bottom)) * heights.units<EOL><DEDENT><DEDENT>ret.append(heights_interp)<EOL>for datavar in args:<EOL><INDENT>datavar = datavar[sort_inds]<EOL>if interpolate:<EOL><INDENT>datavar_interp = interpolate_1d(heights_interp, heights, datavar)<EOL>datavar = datavar_interp<EOL><DEDENT>else:<EOL><INDENT>datavar = datavar[inds]<EOL><DEDENT>ret.append(datavar)<EOL><DEDENT>return ret<EOL>
Return an atmospheric layer from upper air data with the requested bottom and depth. This function will subset an upper air dataset to contain only the specified layer using the heights only. Parameters ---------- heights : array-like Atmospheric heights depth : `pint.Quantity` The thickness of the layer *args : array-like Atmospheric variable(s) measured at the given heights bottom : `pint.Quantity`, optional The bottom of the layer interpolate : bool, optional Interpolate the top and bottom points if they are not in the given data. Defaults to True. with_agl : bool, optional Returns the heights as above ground level by subtracting the minimum height in the provided heights. Defaults to False. Returns ------- `pint.Quantity, pint.Quantity` The height and data variables of the layer
f8488:m8
@exporter.export<EOL>@preprocess_xarray<EOL>@check_units('<STR_LIT>')<EOL>def get_layer(pressure, *args, **kwargs):
<EOL>heights = kwargs.pop('<STR_LIT>', None)<EOL>bottom = kwargs.pop('<STR_LIT>', None)<EOL>depth = kwargs.pop('<STR_LIT>', <NUM_LIT:100> * units.hPa)<EOL>interpolate = kwargs.pop('<STR_LIT>', True)<EOL>if depth is None:<EOL><INDENT>depth = <NUM_LIT:100> * units.hPa<EOL><DEDENT>for datavar in args:<EOL><INDENT>if len(pressure) != len(datavar):<EOL><INDENT>raise ValueError('<STR_LIT>')<EOL><DEDENT><DEDENT>if bottom is None:<EOL><INDENT>bottom = np.nanmax(pressure) * pressure.units<EOL><DEDENT>bottom_pressure, bottom_height = _get_bound_pressure_height(pressure, bottom,<EOL>heights=heights,<EOL>interpolate=interpolate)<EOL>if depth.dimensionality == {'<STR_LIT>': -<NUM_LIT:1.0>, '<STR_LIT>': <NUM_LIT:1.0>, '<STR_LIT>': -<NUM_LIT>}:<EOL><INDENT>top = bottom_pressure - depth<EOL><DEDENT>elif depth.dimensionality == {'<STR_LIT>': <NUM_LIT:1>}:<EOL><INDENT>top = bottom_height + depth<EOL><DEDENT>else:<EOL><INDENT>raise ValueError('<STR_LIT>')<EOL><DEDENT>top_pressure, _ = _get_bound_pressure_height(pressure, top, heights=heights,<EOL>interpolate=interpolate)<EOL>ret = [] <EOL>sort_inds = np.argsort(pressure)<EOL>pressure = pressure[sort_inds]<EOL>inds = (_less_or_close(pressure, bottom_pressure)<EOL>& _greater_or_close(pressure, top_pressure))<EOL>p_interp = pressure[inds]<EOL>if interpolate:<EOL><INDENT>if not np.any(np.isclose(top_pressure, p_interp)):<EOL><INDENT>p_interp = np.sort(np.append(p_interp, top_pressure)) * pressure.units<EOL><DEDENT>if not np.any(np.isclose(bottom_pressure, p_interp)):<EOL><INDENT>p_interp = np.sort(np.append(p_interp, bottom_pressure)) * pressure.units<EOL><DEDENT><DEDENT>ret.append(p_interp[::-<NUM_LIT:1>])<EOL>for datavar in args:<EOL><INDENT>datavar = datavar[sort_inds]<EOL>if interpolate:<EOL><INDENT>datavar_interp = log_interpolate_1d(p_interp, pressure, datavar)<EOL>datavar = datavar_interp<EOL><DEDENT>else:<EOL><INDENT>datavar = datavar[inds]<EOL><DEDENT>ret.append(datavar[::-<NUM_LIT:1>])<EOL><DEDENT>return ret<EOL>
r"""Return an atmospheric layer from upper air data with the requested bottom and depth. This function will subset an upper air dataset to contain only the specified layer. The bottom of the layer can be specified with a pressure or height above the surface pressure. The bottom defaults to the surface pressure. The depth of the layer can be specified in terms of pressure or height above the bottom of the layer. If the top and bottom of the layer are not in the data, they are interpolated by default. Parameters ---------- pressure : array-like Atmospheric pressure profile *args : array-like Atmospheric variable(s) measured at the given pressures heights: array-like, optional Atmospheric heights corresponding to the given pressures. Defaults to using heights calculated from ``p`` assuming a standard atmosphere. bottom : `pint.Quantity`, optional The bottom of the layer as a pressure or height above the surface pressure. Defaults to the highest pressure or lowest height given. depth : `pint.Quantity`, optional The thickness of the layer as a pressure or height above the bottom of the layer. Defaults to 100 hPa. interpolate : bool, optional Interpolate the top and bottom points if they are not in the given data. Defaults to True. Returns ------- `pint.Quantity, pint.Quantity` The pressure and data variables of the layer
f8488:m9
@exporter.export<EOL>@preprocess_xarray<EOL>@deprecated('<STR_LIT>', addendum=('<STR_LIT>'<EOL>'<STR_LIT>'), pending=False)<EOL>def interp(x, xp, *args, **kwargs):
return interpolate_1d(x, xp, *args, **kwargs)<EOL>
Wrap interpolate_1d for deprecated interp.
f8488:m10
@exporter.export<EOL>@preprocess_xarray<EOL>def find_bounding_indices(arr, values, axis, from_below=True):
<EOL>indices_shape = list(arr.shape)<EOL>indices_shape[axis] = len(values)<EOL>indices = np.empty(indices_shape, dtype=int)<EOL>good = np.empty(indices_shape, dtype=bool)<EOL>store_slice = [slice(None)] * arr.ndim<EOL>for level_index, value in enumerate(values):<EOL><INDENT>switches = np.abs(np.diff((arr <= value).astype(int), axis=axis))<EOL>good_search = np.any(switches, axis=axis)<EOL>if from_below:<EOL><INDENT>index = switches.argmax(axis=axis) + <NUM_LIT:1><EOL><DEDENT>else:<EOL><INDENT>arr_slice = [slice(None)] * arr.ndim<EOL>arr_slice[axis] = slice(None, None, -<NUM_LIT:1>)<EOL>index = arr.shape[axis] - <NUM_LIT:1> - switches[tuple(arr_slice)].argmax(axis=axis)<EOL><DEDENT>index[~good_search] = <NUM_LIT:0><EOL>store_slice[axis] = level_index<EOL>indices[tuple(store_slice)] = index<EOL>good[tuple(store_slice)] = good_search<EOL><DEDENT>above = broadcast_indices(arr, indices, arr.ndim, axis)<EOL>below = broadcast_indices(arr, indices - <NUM_LIT:1>, arr.ndim, axis)<EOL>return above, below, good<EOL>
Find the indices surrounding the values within arr along axis. Returns a set of above, below, good. Above and below are lists of arrays of indices. These lists are formulated such that they can be used directly to index into a numpy array and get the expected results (no extra slices or ellipsis necessary). `good` is a boolean array indicating the "columns" that actually had values to bound the desired value(s). Parameters ---------- arr : array-like Array to search for values values: array-like One or more values to search for in `arr` axis : int The dimension of `arr` along which to search. from_below : bool, optional Whether to search from "below" (i.e. low indices to high indices). If `False`, the search will instead proceed from high indices to low indices. Defaults to `True`. Returns ------- above : list of arrays List of broadcasted indices to the location above the desired value below : list of arrays List of broadcasted indices to the location below the desired value good : array Boolean array indicating where the search found proper bounds for the desired value
f8488:m11
@exporter.export<EOL>@preprocess_xarray<EOL>@deprecated('<STR_LIT>', addendum=('<STR_LIT>'<EOL>'<STR_LIT>'), pending=False)<EOL>def log_interp(x, xp, *args, **kwargs):
return log_interpolate_1d(x, xp, *args, **kwargs)<EOL>
Wrap log_interpolate_1d for deprecated log_interp.
f8488:m12
def _greater_or_close(a, value, **kwargs):
return (a > value) | np.isclose(a, value, **kwargs)<EOL>
r"""Compare values for greater than or close to a target value, returning a boolean mask. Returns a boolean mask for values greater than or equal to a target within a specified absolute or relative tolerance (as in :func:`numpy.isclose`). Parameters ---------- a : array-like Array of values to be compared value : float Comparison value Returns ------- array-like Boolean array where values are greater than or nearly equal to value.
f8488:m13
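The mask logic can be mirrored element by element with `math.isclose` in place of `numpy.isclose`. Note the default tolerances differ: `math.isclose` uses rel_tol=1e-9 and abs_tol=0.0, while `numpy.isclose` uses rtol=1e-5 and atol=1e-8, so this sketch is stricter than the original:

```python
import math

def greater_or_close(a, value, rel_tol=1e-9, abs_tol=0.0):
    """Elementwise 'greater than or nearly equal to value' for a list
    of floats, mirroring (a > value) | isclose(a, value)."""
    return [x > value or math.isclose(x, value, rel_tol=rel_tol, abs_tol=abs_tol)
            for x in a]

greater_or_close([0.5, 1.0, 1.5], 1.0)  # -> [False, True, True]
```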
def _less_or_close(a, value, **kwargs):
return (a < value) | np.isclose(a, value, **kwargs)<EOL>
r"""Compare values for less than or close to a target value, returning a boolean mask. Returns a boolean mask for values less than or equal to a target within a specified absolute or relative tolerance (as in :func:`numpy.isclose`). Parameters ---------- a : array-like Array of values to be compared value : float Comparison value Returns ------- array-like Boolean array where values are less than or nearly equal to value.
f8488:m14
@deprecated('<STR_LIT>', addendum='<STR_LIT>'<EOL>'<STR_LIT>'<EOL>'<STR_LIT>',<EOL>pending=False)<EOL>@exporter.export<EOL>@preprocess_xarray<EOL>def lat_lon_grid_spacing(longitude, latitude, **kwargs):
<EOL>dx, dy = lat_lon_grid_deltas(longitude, latitude, **kwargs)<EOL>return np.abs(dx), np.abs(dy)<EOL>
r"""Calculate the distance between grid points that are in a latitude/longitude format. Calculate the distance between grid points when the grid spacing is defined by delta lat/lon rather than delta x/y Parameters ---------- longitude : array_like array of longitudes defining the grid latitude : array_like array of latitudes defining the grid kwargs Other keyword arguments to pass to :class:`~pyproj.Geod` Returns ------- dx, dy: 2D arrays of distances between grid points in the x and y direction Notes ----- Accepts 1D or 2D arrays for latitude and longitude Assumes [Y, X] for 2D arrays .. deprecated:: 0.8.0 Function has been replaced with the signed delta distance calculation `lat_lon_grid_deltas` and will be removed from MetPy in 0.11.0.
f8488:m15
@exporter.export<EOL>@preprocess_xarray<EOL>def lat_lon_grid_deltas(longitude, latitude, **kwargs):
from pyproj import Geod<EOL>if latitude.ndim != longitude.ndim:<EOL><INDENT>raise ValueError('<STR_LIT>')<EOL><DEDENT>if latitude.ndim < <NUM_LIT:2>:<EOL><INDENT>longitude, latitude = np.meshgrid(longitude, latitude)<EOL><DEDENT>geod_args = {'<STR_LIT>': '<STR_LIT>'}<EOL>if kwargs:<EOL><INDENT>geod_args = kwargs<EOL><DEDENT>g = Geod(**geod_args)<EOL>forward_az, _, dy = g.inv(longitude[..., :-<NUM_LIT:1>, :], latitude[..., :-<NUM_LIT:1>, :],<EOL>longitude[..., <NUM_LIT:1>:, :], latitude[..., <NUM_LIT:1>:, :])<EOL>dy[(forward_az < -<NUM_LIT>) | (forward_az > <NUM_LIT>)] *= -<NUM_LIT:1><EOL>forward_az, _, dx = g.inv(longitude[..., :, :-<NUM_LIT:1>], latitude[..., :, :-<NUM_LIT:1>],<EOL>longitude[..., :, <NUM_LIT:1>:], latitude[..., :, <NUM_LIT:1>:])<EOL>dx[(forward_az < <NUM_LIT:0.>) | (forward_az > <NUM_LIT>)] *= -<NUM_LIT:1><EOL>return dx * units.meter, dy * units.meter<EOL>
r"""Calculate the delta between grid points that are in a latitude/longitude format. Calculate the signed delta distance between grid points when the grid spacing is defined by delta lat/lon rather than delta x/y Parameters ---------- longitude : array_like array of longitudes defining the grid latitude : array_like array of latitudes defining the grid kwargs Other keyword arguments to pass to :class:`~pyproj.Geod` Returns ------- dx, dy: at least two dimensional arrays of signed deltas between grid points in the x and y direction Notes ----- Accepts 1D, 2D, or higher arrays for latitude and longitude Assumes [..., Y, X] for >=2 dimensional arrays
f8488:m16
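The pyproj-based calculation above computes true geodesic distances. As a rough, dependency-free illustration of what the deltas represent, here is a spherical-earth sketch (the radius value and function name are assumptions of this example, not MetPy API):

```python
import numpy as np

EARTH_RADIUS = 6370997.  # metres; assumed spherical radius for this sketch

def sphere_grid_deltas(longitude, latitude):
    """Spherical-earth approximation of the dx/dy grid deltas (sketch).

    dy comes from arcs of constant longitude, dx from arcs of constant
    latitude scaled by cos(latitude) at the segment midpoint; the real
    function instead uses geodesics from pyproj.Geod.
    """
    if latitude.ndim < 2:
        longitude, latitude = np.meshgrid(longitude, latitude)
    dy = EARTH_RADIUS * np.deg2rad(np.diff(latitude, axis=-2))
    mid_lat = 0.5 * (latitude[..., :, :-1] + latitude[..., :, 1:])
    dx = (EARTH_RADIUS * np.cos(np.deg2rad(mid_lat))
          * np.deg2rad(np.diff(longitude, axis=-1)))
    return dx, dy

lon = np.array([0., 1., 2.])
lat = np.array([0., 1.])
dx, dy = sphere_grid_deltas(lon, lat)
```

As expected, the one-degree arcs match on the equator and the zonal spacing shrinks with latitude.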
@exporter.export<EOL>def grid_deltas_from_dataarray(f):
if f.metpy.crs['<STR_LIT>'] == '<STR_LIT>':<EOL><INDENT>dx, dy = lat_lon_grid_deltas(f.metpy.x, f.metpy.y,<EOL>initstring=f.metpy.cartopy_crs.proj4_init)<EOL>slc_x = slc_y = tuple([np.newaxis] * (f.ndim - <NUM_LIT:2>) + [slice(None)] * <NUM_LIT:2>)<EOL><DEDENT>else:<EOL><INDENT>dx = np.diff(f.metpy.x.metpy.unit_array.to('<STR_LIT:m>').magnitude) * units('<STR_LIT:m>')<EOL>dy = np.diff(f.metpy.y.metpy.unit_array.to('<STR_LIT:m>').magnitude) * units('<STR_LIT:m>')<EOL>slc = [np.newaxis] * (f.ndim - <NUM_LIT:2>)<EOL>slc_x = tuple(slc + [np.newaxis, slice(None)])<EOL>slc_y = tuple(slc + [slice(None), np.newaxis])<EOL><DEDENT>return dx[slc_x], dy[slc_y]<EOL>
Calculate the horizontal deltas between grid points of a DataArray. Calculate the signed delta distance between grid points of a DataArray in the horizontal directions, whether the grid is lat/lon or x/y. Parameters ---------- f : `xarray.DataArray` Parsed DataArray on a latitude/longitude grid, in (..., lat, lon) or (..., y, x) dimension order Returns ------- dx, dy: arrays of signed deltas between grid points in the x and y directions with dimensions matching those of `f`. See Also -------- lat_lon_grid_deltas
f8488:m17
def xarray_derivative_wrap(func):
@functools.wraps(func)<EOL>def wrapper(f, **kwargs):<EOL><INDENT>if '<STR_LIT:x>' in kwargs or '<STR_LIT>' in kwargs:<EOL><INDENT>return preprocess_xarray(func)(f, **kwargs)<EOL><DEDENT>elif isinstance(f, xr.DataArray):<EOL><INDENT>axis = f.metpy.find_axis_name(kwargs.get('<STR_LIT>', <NUM_LIT:0>))<EOL>new_kwargs = {'<STR_LIT>': f.get_axis_num(axis)}<EOL>if f[axis].attrs.get('<STR_LIT>') == '<STR_LIT:T>':<EOL><INDENT>new_kwargs['<STR_LIT:x>'] = f[axis].metpy.as_timestamp().metpy.unit_array<EOL><DEDENT>elif CFConventionHandler.check_axis(f[axis], '<STR_LIT>'):<EOL><INDENT>new_kwargs['<STR_LIT>'], _ = grid_deltas_from_dataarray(f)<EOL><DEDENT>elif CFConventionHandler.check_axis(f[axis], '<STR_LIT>'):<EOL><INDENT>_, new_kwargs['<STR_LIT>'] = grid_deltas_from_dataarray(f)<EOL><DEDENT>else:<EOL><INDENT>new_kwargs['<STR_LIT:x>'] = f[axis].metpy.unit_array<EOL><DEDENT>result = func(f.metpy.unit_array, **new_kwargs)<EOL>return xr.DataArray(result.magnitude,<EOL>coords=f.coords,<EOL>dims=f.dims,<EOL>attrs={'<STR_LIT>': str(result.units)})<EOL><DEDENT>else:<EOL><INDENT>raise ValueError('<STR_LIT>'<EOL>'<STR_LIT>')<EOL><DEDENT><DEDENT>return wrapper<EOL>
Decorate the derivative functions to make them work nicely with DataArrays. This will automatically determine if the coordinates can be pulled directly from the DataArray, or if a call to lat_lon_grid_deltas is needed.
f8488:m18
@exporter.export<EOL>@xarray_derivative_wrap<EOL>def first_derivative(f, **kwargs):
n, axis, delta = _process_deriv_args(f, kwargs)<EOL>slice0 = [slice(None)] * n<EOL>slice1 = [slice(None)] * n<EOL>slice2 = [slice(None)] * n<EOL>delta_slice0 = [slice(None)] * n<EOL>delta_slice1 = [slice(None)] * n<EOL>slice0[axis] = slice(None, -<NUM_LIT:2>)<EOL>slice1[axis] = slice(<NUM_LIT:1>, -<NUM_LIT:1>)<EOL>slice2[axis] = slice(<NUM_LIT:2>, None)<EOL>delta_slice0[axis] = slice(None, -<NUM_LIT:1>)<EOL>delta_slice1[axis] = slice(<NUM_LIT:1>, None)<EOL>combined_delta = delta[tuple(delta_slice0)] + delta[tuple(delta_slice1)]<EOL>delta_diff = delta[tuple(delta_slice1)] - delta[tuple(delta_slice0)]<EOL>center = (- delta[tuple(delta_slice1)] / (combined_delta * delta[tuple(delta_slice0)])<EOL>* f[tuple(slice0)]<EOL>+ delta_diff / (delta[tuple(delta_slice0)] * delta[tuple(delta_slice1)])<EOL>* f[tuple(slice1)]<EOL>+ delta[tuple(delta_slice0)] / (combined_delta * delta[tuple(delta_slice1)])<EOL>* f[tuple(slice2)])<EOL>slice0[axis] = slice(None, <NUM_LIT:1>)<EOL>slice1[axis] = slice(<NUM_LIT:1>, <NUM_LIT:2>)<EOL>slice2[axis] = slice(<NUM_LIT:2>, <NUM_LIT:3>)<EOL>delta_slice0[axis] = slice(None, <NUM_LIT:1>)<EOL>delta_slice1[axis] = slice(<NUM_LIT:1>, <NUM_LIT:2>)<EOL>combined_delta = delta[tuple(delta_slice0)] + delta[tuple(delta_slice1)]<EOL>big_delta = combined_delta + delta[tuple(delta_slice0)]<EOL>left = (- big_delta / (combined_delta * delta[tuple(delta_slice0)])<EOL>* f[tuple(slice0)]<EOL>+ combined_delta / (delta[tuple(delta_slice0)] * delta[tuple(delta_slice1)])<EOL>* f[tuple(slice1)]<EOL>- delta[tuple(delta_slice0)] / (combined_delta * delta[tuple(delta_slice1)])<EOL>* f[tuple(slice2)])<EOL>slice0[axis] = slice(-<NUM_LIT:3>, -<NUM_LIT:2>)<EOL>slice1[axis] = slice(-<NUM_LIT:2>, -<NUM_LIT:1>)<EOL>slice2[axis] = slice(-<NUM_LIT:1>, None)<EOL>delta_slice0[axis] = slice(-<NUM_LIT:2>, -<NUM_LIT:1>)<EOL>delta_slice1[axis] = slice(-<NUM_LIT:1>, None)<EOL>combined_delta = delta[tuple(delta_slice0)] + delta[tuple(delta_slice1)]<EOL>big_delta = combined_delta + delta[tuple(delta_slice1)]<EOL>right = (delta[tuple(delta_slice1)] / (combined_delta * delta[tuple(delta_slice0)])<EOL>* f[tuple(slice0)]<EOL>- combined_delta / (delta[tuple(delta_slice0)] * delta[tuple(delta_slice1)])<EOL>* f[tuple(slice1)]<EOL>+ big_delta / (combined_delta * delta[tuple(delta_slice1)])<EOL>* f[tuple(slice2)])<EOL>return concatenate((left, center, right), axis=axis)<EOL>
Calculate the first derivative of a grid of values. Works for both regularly-spaced data and grids with varying spacing. Either `x` or `delta` must be specified, or `f` must be given as an `xarray.DataArray` with attached coordinate and projection information. If `f` is an `xarray.DataArray`, and `x` or `delta` are given, `f` will be converted to a `pint.Quantity` and the derivative returned as a `pint.Quantity`, otherwise, if neither `x` nor `delta` are given, the attached coordinate information belonging to `axis` will be used and the derivative will be returned as an `xarray.DataArray`. This uses 3 points to calculate the derivative, using forward or backward at the edges of the grid as appropriate, and centered elsewhere. The irregular spacing is handled explicitly, using the formulation as specified by [Bowen2005]_. Parameters ---------- f : array-like Array of values of which to calculate the derivative axis : int or str, optional The array axis along which to take the derivative. If `f` is ndarray-like, must be an integer. If `f` is a `DataArray`, can be a string (referring to either the coordinate dimension name or the axis type) or integer (referring to axis number), unless using implicit conversion to `pint.Quantity`, in which case it must be an integer. Defaults to 0. x : array-like, optional The coordinate values corresponding to the grid points in `f`. delta : array-like, optional Spacing between the grid points in `f`. Should be one item less than the size of `f` along `axis`. Returns ------- array-like The first derivative calculated along the selected axis. See Also -------- second_derivative
f8488:m19
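The centered three-point stencil with uneven spacing used above can be condensed to a 1-D sketch (illustrative names, not MetPy API). Because the scheme is exact for quadratics, it is easy to check on a parabola over a deliberately irregular grid:

```python
import numpy as np

def first_derivative_1d(f, x):
    """Three-point first derivative on a possibly irregular 1-D grid (sketch).

    Centered, spacing-weighted differences in the interior and one-sided
    second-order differences at the edges; exact for quadratics.
    """
    d = np.diff(x)                       # spacings, length len(f) - 1
    d0, d1 = d[:-1], d[1:]
    out = np.empty_like(f, dtype=float)
    # interior: weighted centered difference for uneven spacing
    out[1:-1] = (-d1 / (d0 * (d0 + d1)) * f[:-2]
                 + (d1 - d0) / (d0 * d1) * f[1:-1]
                 + d0 / (d1 * (d0 + d1)) * f[2:])
    # edges: one-sided second-order differences
    out[0] = ((f[1] - f[0]) / d[0]
              - d[0] * ((f[2] - f[1]) / d[1] - (f[1] - f[0]) / d[0]) / (d[0] + d[1]))
    out[-1] = ((f[-1] - f[-2]) / d[-1]
               + d[-1] * ((f[-1] - f[-2]) / d[-1]
                          - (f[-2] - f[-3]) / d[-2]) / (d[-2] + d[-1]))
    return out

x = np.array([0., 1., 3., 4., 6.])   # deliberately uneven spacing
deriv = first_derivative_1d(x**2, x)
```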
@exporter.export<EOL>@xarray_derivative_wrap<EOL>def second_derivative(f, **kwargs):
n, axis, delta = _process_deriv_args(f, kwargs)<EOL>slice0 = [slice(None)] * n<EOL>slice1 = [slice(None)] * n<EOL>slice2 = [slice(None)] * n<EOL>delta_slice0 = [slice(None)] * n<EOL>delta_slice1 = [slice(None)] * n<EOL>slice0[axis] = slice(None, -<NUM_LIT:2>)<EOL>slice1[axis] = slice(<NUM_LIT:1>, -<NUM_LIT:1>)<EOL>slice2[axis] = slice(<NUM_LIT:2>, None)<EOL>delta_slice0[axis] = slice(None, -<NUM_LIT:1>)<EOL>delta_slice1[axis] = slice(<NUM_LIT:1>, None)<EOL>combined_delta = delta[tuple(delta_slice0)] + delta[tuple(delta_slice1)]<EOL>center = <NUM_LIT:2> * (f[tuple(slice0)] / (combined_delta * delta[tuple(delta_slice0)])<EOL>- f[tuple(slice1)] / (delta[tuple(delta_slice0)]<EOL>* delta[tuple(delta_slice1)])<EOL>+ f[tuple(slice2)] / (combined_delta * delta[tuple(delta_slice1)]))<EOL>slice0[axis] = slice(None, <NUM_LIT:1>)<EOL>slice1[axis] = slice(<NUM_LIT:1>, <NUM_LIT:2>)<EOL>slice2[axis] = slice(<NUM_LIT:2>, <NUM_LIT:3>)<EOL>delta_slice0[axis] = slice(None, <NUM_LIT:1>)<EOL>delta_slice1[axis] = slice(<NUM_LIT:1>, <NUM_LIT:2>)<EOL>combined_delta = delta[tuple(delta_slice0)] + delta[tuple(delta_slice1)]<EOL>left = <NUM_LIT:2> * (f[tuple(slice0)] / (combined_delta * delta[tuple(delta_slice0)])<EOL>- f[tuple(slice1)] / (delta[tuple(delta_slice0)] * delta[tuple(delta_slice1)])<EOL>+ f[tuple(slice2)] / (combined_delta * delta[tuple(delta_slice1)]))<EOL>slice0[axis] = slice(-<NUM_LIT:3>, -<NUM_LIT:2>)<EOL>slice1[axis] = slice(-<NUM_LIT:2>, -<NUM_LIT:1>)<EOL>slice2[axis] = slice(-<NUM_LIT:1>, None)<EOL>delta_slice0[axis] = slice(-<NUM_LIT:2>, -<NUM_LIT:1>)<EOL>delta_slice1[axis] = slice(-<NUM_LIT:1>, None)<EOL>combined_delta = delta[tuple(delta_slice0)] + delta[tuple(delta_slice1)]<EOL>right = <NUM_LIT:2> * (f[tuple(slice0)] / (combined_delta * delta[tuple(delta_slice0)])<EOL>- f[tuple(slice1)] / (delta[tuple(delta_slice0)] * delta[tuple(delta_slice1)])<EOL>+ f[tuple(slice2)] / (combined_delta * delta[tuple(delta_slice1)]))<EOL>return concatenate((left, center, right), axis=axis)<EOL>
Calculate the second derivative of a grid of values. Works for both regularly-spaced data and grids with varying spacing. Either `x` or `delta` must be specified, or `f` must be given as an `xarray.DataArray` with attached coordinate and projection information. If `f` is an `xarray.DataArray`, and `x` or `delta` are given, `f` will be converted to a `pint.Quantity` and the derivative returned as a `pint.Quantity`, otherwise, if neither `x` nor `delta` are given, the attached coordinate information belonging to `axis` will be used and the derivative will be returned as an `xarray.DataArray`. This uses 3 points to calculate the derivative, using forward or backward at the edges of the grid as appropriate, and centered elsewhere. The irregular spacing is handled explicitly, using the formulation as specified by [Bowen2005]_. Parameters ---------- f : array-like Array of values of which to calculate the derivative axis : int or str, optional The array axis along which to take the derivative. If `f` is ndarray-like, must be an integer. If `f` is a `DataArray`, can be a string (referring to either the coordinate dimension name or the axis type) or integer (referring to axis number), unless using implicit conversion to `pint.Quantity`, in which case it must be an integer. Defaults to 0. x : array-like, optional The coordinate values corresponding to the grid points in `f`. delta : array-like, optional Spacing between the grid points in `f`. There should be one item less than the size of `f` along `axis`. Returns ------- array-like The second derivative calculated along the selected axis. See Also -------- first_derivative
f8488:m20
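The corresponding second-derivative stencil, again as a 1-D sketch with illustrative names. The first and last points reuse the nearest interior stencil, which is what the edge slices in the body above evaluate to; the scheme is exact for quadratics:

```python
import numpy as np

def second_derivative_1d(f, x):
    """Three-point second derivative on an uneven 1-D grid (sketch)."""
    d = np.diff(x)
    d0, d1 = d[:-1], d[1:]
    comb = d0 + d1
    # interior: spacing-weighted 3-point second difference
    interior = 2 * (f[:-2] / (comb * d0)
                    - f[1:-1] / (d0 * d1)
                    + f[2:] / (comb * d1))
    # edges: reuse the nearest interior stencil
    return np.concatenate(([interior[0]], interior, [interior[-1]]))

x = np.array([0., 1., 3., 4., 6.])   # deliberately uneven spacing
curv = second_derivative_1d(x**2, x)
```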
@exporter.export<EOL>def gradient(f, **kwargs):
pos_kwarg, positions, axes = _process_gradient_args(f, kwargs)<EOL>return tuple(first_derivative(f, axis=axis, **{pos_kwarg: positions[ind]})<EOL>for ind, axis in enumerate(axes))<EOL>
Calculate the gradient of a grid of values. Works for both regularly-spaced data, and grids with varying spacing. Either `coordinates` or `deltas` must be specified, or `f` must be given as an `xarray.DataArray` with attached coordinate and projection information. If `f` is an `xarray.DataArray`, and `coordinates` or `deltas` are given, `f` will be converted to a `pint.Quantity` and the gradient returned as a tuple of `pint.Quantity`, otherwise, if neither `coordinates` nor `deltas` are given, the attached coordinate information belonging to `axis` will be used and the gradient will be returned as a tuple of `xarray.DataArray`. Parameters ---------- f : array-like Array of values of which to calculate the derivative coordinates : array-like, optional Sequence of arrays containing the coordinate values corresponding to the grid points in `f` in axis order. deltas : array-like, optional Sequence of arrays or scalars that specify the spacing between the grid points in `f` in axis order. There should be one item less than the size of `f` along the applicable axis. axes : sequence, optional Sequence of strings (if `f` is a `xarray.DataArray` and implicit conversion to `pint.Quantity` is not used) or integers that specify the array axes along which to take the derivatives. Defaults to all axes of `f`. If given, and used with `coordinates` or `deltas`, its length must be less than or equal to that of the `coordinates` or `deltas` given. Returns ------- tuple of array-like The first derivative calculated along each specified axis of the original array See Also -------- laplacian, first_derivative Notes ----- `gradient` previously accepted `x` as a parameter for coordinate values. This has been deprecated in 0.9 in favor of `coordinates`. If this function is used without the `axes` parameter, the length of `coordinates` or `deltas` (as applicable) should match the number of dimensions of `f`.
f8488:m21
@exporter.export<EOL>def laplacian(f, **kwargs):
pos_kwarg, positions, axes = _process_gradient_args(f, kwargs)<EOL>derivs = [second_derivative(f, axis=axis, **{pos_kwarg: positions[ind]})<EOL>for ind, axis in enumerate(axes)]<EOL>laplac = sum(derivs)<EOL>if isinstance(derivs[<NUM_LIT:0>], xr.DataArray):<EOL><INDENT>laplac.attrs['<STR_LIT>'] = derivs[<NUM_LIT:0>].attrs['<STR_LIT>']<EOL><DEDENT>return laplac<EOL>
Calculate the Laplacian of a grid of values. Works for both regularly-spaced data, and grids with varying spacing. Either `coordinates` or `deltas` must be specified, or `f` must be given as an `xarray.DataArray` with attached coordinate and projection information. If `f` is an `xarray.DataArray`, and `coordinates` or `deltas` are given, `f` will be converted to a `pint.Quantity` and the Laplacian returned as a `pint.Quantity`, otherwise, if neither `coordinates` nor `deltas` are given, the attached coordinate information belonging to `axis` will be used and the Laplacian will be returned as an `xarray.DataArray`. Parameters ---------- f : array-like Array of values of which to calculate the derivative coordinates : array-like, optional The coordinate values corresponding to the grid points in `f` deltas : array-like, optional Spacing between the grid points in `f`. There should be one item less than the size of `f` along the applicable axis. axes : sequence, optional Sequence of strings (if `f` is a `xarray.DataArray` and implicit conversion to `pint.Quantity` is not used) or integers that specify the array axes along which to take the derivatives. Defaults to all axes of `f`. If given, and used with `coordinates` or `deltas`, its length must be less than or equal to that of the `coordinates` or `deltas` given. Returns ------- array-like The Laplacian See Also -------- gradient, second_derivative Notes ----- `laplacian` previously accepted `x` as a parameter for coordinate values. This has been deprecated in 0.9 in favor of `coordinates`. If this function is used without the `axes` parameter, the length of `coordinates` or `deltas` (as applicable) should match the number of dimensions of `f`.
f8488:m22
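As a sanity check on the sum-of-unmixed-second-derivatives definition, two passes of `numpy.gradient` per axis recover the constant Laplacian of a paraboloid exactly (using `edge_order=2`, since each pass is then exact for quadratics):

```python
import numpy as np

x = np.linspace(0., 4., 9)
y = np.linspace(0., 4., 9)
xx, yy = np.meshgrid(x, y)
f = xx**2 + yy**2  # analytic Laplacian is 4 everywhere

# unmixed second derivatives as two first-derivative passes per axis
dfdx = np.gradient(f, x, axis=1, edge_order=2)
dfdy = np.gradient(f, y, axis=0, edge_order=2)
lap = (np.gradient(dfdx, x, axis=1, edge_order=2)
       + np.gradient(dfdy, y, axis=0, edge_order=2))
```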
def _broadcast_to_axis(arr, axis, ndim):
if arr.ndim == <NUM_LIT:1> and arr.ndim < ndim:<EOL><INDENT>new_shape = [<NUM_LIT:1>] * ndim<EOL>new_shape[axis] = arr.size<EOL>arr = arr.reshape(*new_shape)<EOL><DEDENT>return arr<EOL>
Handle reshaping coordinate array to have proper dimensionality. This puts the values along the specified axis.
f8488:m23
def _process_gradient_args(f, kwargs):
axes = kwargs.get('<STR_LIT>', range(f.ndim))<EOL>def _check_length(positions):<EOL><INDENT>if '<STR_LIT>' in kwargs and len(positions) < len(axes):<EOL><INDENT>raise ValueError('<STR_LIT>'<EOL>'<STR_LIT>')<EOL><DEDENT>elif '<STR_LIT>' not in kwargs and len(positions) != len(axes):<EOL><INDENT>raise ValueError('<STR_LIT>'<EOL>'<STR_LIT>')<EOL><DEDENT><DEDENT>if '<STR_LIT>' in kwargs:<EOL><INDENT>if '<STR_LIT>' in kwargs or '<STR_LIT:x>' in kwargs:<EOL><INDENT>raise ValueError('<STR_LIT>')<EOL><DEDENT>_check_length(kwargs['<STR_LIT>'])<EOL>return '<STR_LIT>', kwargs['<STR_LIT>'], axes<EOL><DEDENT>elif '<STR_LIT>' in kwargs:<EOL><INDENT>_check_length(kwargs['<STR_LIT>'])<EOL>return '<STR_LIT:x>', kwargs['<STR_LIT>'], axes<EOL><DEDENT>elif '<STR_LIT:x>' in kwargs:<EOL><INDENT>warnings.warn('<STR_LIT>'<EOL>'<STR_LIT>', metpyDeprecation)<EOL>_check_length(kwargs['<STR_LIT:x>'])<EOL>return '<STR_LIT:x>', kwargs['<STR_LIT:x>'], axes<EOL><DEDENT>elif isinstance(f, xr.DataArray):<EOL><INDENT>return '<STR_LIT>', axes, axes <EOL><DEDENT>else:<EOL><INDENT>raise ValueError('<STR_LIT>'<EOL>'<STR_LIT>')<EOL><DEDENT>
Handle common processing of arguments for gradient and gradient-like functions.
f8488:m24
def _process_deriv_args(f, kwargs):
n = f.ndim<EOL>axis = normalize_axis_index(kwargs.get('<STR_LIT>', <NUM_LIT:0>), n)<EOL>if f.shape[axis] < <NUM_LIT:3>:<EOL><INDENT>raise ValueError('<STR_LIT>')<EOL><DEDENT>if '<STR_LIT>' in kwargs:<EOL><INDENT>if '<STR_LIT:x>' in kwargs:<EOL><INDENT>raise ValueError('<STR_LIT>')<EOL><DEDENT>delta = atleast_1d(kwargs['<STR_LIT>'])<EOL>if delta.size == <NUM_LIT:1>:<EOL><INDENT>diff_size = list(f.shape)<EOL>diff_size[axis] -= <NUM_LIT:1><EOL>delta_units = getattr(delta, '<STR_LIT>', None)<EOL>delta = np.broadcast_to(delta, diff_size, subok=True)<EOL>if delta_units is not None:<EOL><INDENT>delta = delta * delta_units<EOL><DEDENT><DEDENT>else:<EOL><INDENT>delta = _broadcast_to_axis(delta, axis, n)<EOL><DEDENT><DEDENT>elif '<STR_LIT:x>' in kwargs:<EOL><INDENT>x = _broadcast_to_axis(kwargs['<STR_LIT:x>'], axis, n)<EOL>delta = diff(x, axis=axis)<EOL><DEDENT>else:<EOL><INDENT>raise ValueError('<STR_LIT>')<EOL><DEDENT>return n, axis, delta<EOL>
Handle common processing of arguments for derivative functions.
f8488:m25
@exporter.export<EOL>@preprocess_xarray<EOL>def parse_angle(input_dir):
if isinstance(input_dir, str):<EOL><INDENT>abb_dirs = [_abbrieviate_direction(input_dir)]<EOL><DEDENT>elif isinstance(input_dir, list):<EOL><INDENT>input_dir_str = '<STR_LIT:U+002C>'.join(input_dir)<EOL>abb_dir_str = _abbrieviate_direction(input_dir_str)<EOL>abb_dirs = abb_dir_str.split('<STR_LIT:U+002C>')<EOL><DEDENT>return itemgetter(*abb_dirs)(DIR_DICT)<EOL>
Calculate the meteorological angle from directional text. Works for abbreviations or whole words (E -> 90 | South -> 180) and can also parse 22.5 degree angles such as ESE/East South East Parameters ---------- input_dir : string or array-like of strings Directional text such as west, [south-west, ne], etc Returns ------- angle The angle in degrees
f8488:m26
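The normalize-then-look-up approach can be sketched for the eight principal compass points (the table and function names are illustrative; the real DIR_DICT also covers the 22.5 degree points):

```python
DIR_ANGLES = {'N': 0., 'NE': 45., 'E': 90., 'SE': 135.,
              'S': 180., 'SW': 225., 'W': 270., 'NW': 315.}

def parse_direction(text):
    """Normalize directional text to an abbreviation, then look up degrees."""
    abbrev = (text.upper()
              .replace('-', '').replace('_', '').replace(' ', '')
              .replace('NORTH', 'N').replace('EAST', 'E')
              .replace('SOUTH', 'S').replace('WEST', 'W'))
    return DIR_ANGLES[abbrev]
```

Replacing NORTH before EAST matters: 'NORTHEAST' first collapses to 'NEAST', then to 'NE'.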
def _abbrieviate_direction(ext_dir_str):
return (ext_dir_str<EOL>.upper()<EOL>.replace('<STR_LIT:_>', '<STR_LIT>')<EOL>.replace('<STR_LIT:->', '<STR_LIT>')<EOL>.replace('<STR_LIT:U+0020>', '<STR_LIT>')<EOL>.replace('<STR_LIT>', '<STR_LIT:N>')<EOL>.replace('<STR_LIT>', '<STR_LIT:E>')<EOL>.replace('<STR_LIT>', '<STR_LIT:S>')<EOL>.replace('<STR_LIT>', '<STR_LIT>')<EOL>)<EOL>
Convert extended (non-abbreviated) directions to abbreviations.
f8488:m27
@exporter.export<EOL>@preprocess_xarray<EOL>def get_perturbation(ts, axis=-<NUM_LIT:1>):
slices = [slice(None)] * ts.ndim<EOL>slices[axis] = None<EOL>mean = ts.mean(axis=axis)[tuple(slices)]<EOL>return ts - mean<EOL>
r"""Compute the perturbation from the mean of a time series. Parameters ---------- ts : array_like The time series from which you wish to find the perturbation time series (perturbation from the mean). Returns ------- array_like The perturbation time series. Other Parameters ---------------- axis : int The index of the time axis. Default is -1 Notes ----- The perturbation time series produced by this function is defined as the perturbations about the mean: .. math:: x(t)^{\prime} = x(t) - \overline{x(t)}
f8489:m0
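The perturbation series is just the series minus its time mean, so it averages to zero by construction:

```python
import numpy as np

ts = np.array([1., 2., 3., 4.])
perturbation = ts - ts.mean()   # x'(t) = x(t) - mean over time
```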
@exporter.export<EOL>@preprocess_xarray<EOL>def tke(u, v, w, perturbation=False, axis=-<NUM_LIT:1>):
if not perturbation:<EOL><INDENT>u = get_perturbation(u, axis=axis)<EOL>v = get_perturbation(v, axis=axis)<EOL>w = get_perturbation(w, axis=axis)<EOL><DEDENT>u_cont = np.mean(u * u, axis=axis)<EOL>v_cont = np.mean(v * v, axis=axis)<EOL>w_cont = np.mean(w * w, axis=axis)<EOL>return <NUM_LIT:0.5> * np.sqrt(u_cont + v_cont + w_cont)<EOL>
r"""Compute turbulence kinetic energy. Compute the turbulence kinetic energy (e) from the time series of the velocity components. Parameters ---------- u : array_like The wind component along the x-axis v : array_like The wind component along the y-axis w : array_like The wind component along the z-axis perturbation : {False, True}, optional True if the `u`, `v`, and `w` components of wind speed supplied to the function are perturbation velocities. If False, perturbation velocities will be calculated by removing the mean value from each component. Returns ------- array_like The corresponding turbulence kinetic energy value Other Parameters ---------------- axis : int The index of the time axis. Default is -1 See Also -------- get_perturbation : Used to compute perturbations if `perturbation` is False. Notes ----- Turbulence Kinetic Energy is computed as: .. math:: e = 0.5 \sqrt{\overline{u^{\prime2}} + \overline{v^{\prime2}} + \overline{w^{\prime2}}}, where the velocity components .. math:: u^{\prime}, v^{\prime}, w^{\prime} are perturbation velocities. For more information on the subject, please see [Garratt1994]_.
f8489:m1
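Worked on a toy two-sample series whose means are already zero, the TKE definition above reduces to 0.5 times the square root of the summed mean squared perturbations:

```python
import numpy as np

u = np.array([1., -1.])
v = np.array([2., -2.])
w = np.array([0., 0.])
up, vp, wp = u - u.mean(), v - v.mean(), w - w.mean()   # perturbations
# mean(u'^2) = 1, mean(v'^2) = 4, mean(w'^2) = 0
e = 0.5 * np.sqrt((up**2).mean() + (vp**2).mean() + (wp**2).mean())
```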
@exporter.export<EOL>@preprocess_xarray<EOL>def kinematic_flux(vel, b, perturbation=False, axis=-<NUM_LIT:1>):
kf = np.mean(vel * b, axis=axis)<EOL>if not perturbation:<EOL><INDENT>kf -= np.mean(vel, axis=axis) * np.mean(b, axis=axis)<EOL><DEDENT>return np.atleast_1d(kf)<EOL>
r"""Compute the kinematic flux from two time series. Compute the kinematic flux from the time series of two variables `vel` and `b`. Note that to be a kinematic flux, at least one variable must be a component of velocity. Parameters ---------- vel : array_like A component of velocity b : array_like May be a component of velocity or a scalar variable (e.g. Temperature) perturbation : bool, optional `True` if the `vel` and `b` variables are perturbations. If `False`, perturbations will be calculated by removing the mean value from each variable. Defaults to `False`. Returns ------- array_like The corresponding kinematic flux Other Parameters ---------------- axis : int, optional The index of the time axis, along which the calculations will be performed. Defaults to -1 Notes ----- A kinematic flux is computed as .. math:: \overline{u^{\prime} s^{\prime}} where the prime notation denotes perturbation variables, and at least one variable is a perturbation velocity. For example, the vertical kinematic momentum flux (two velocity components): .. math:: \overline{u^{\prime} w^{\prime}} or the vertical kinematic heat flux (one velocity component, and one scalar): .. math:: \overline{w^{\prime} T^{\prime}} If perturbation variables are passed into this function (i.e. `perturbation` is True), the kinematic flux is computed using the equation above. However, the equation above can be rewritten as .. math:: \overline{us} - \overline{u}~\overline{s} which is computationally more efficient. This is how the kinematic flux is computed in this function if `perturbation` is False. For more information on the subject, please see [Garratt1994]_.
f8489:m2
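The covariance identity the Notes describe, mean(u's') = mean(us) - mean(u)mean(s), can be verified numerically on synthetic correlated series (variable names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=1000)                 # vertical velocity series
s = 0.5 * w + rng.normal(size=1000)       # scalar correlated with w

direct = np.mean((w - w.mean()) * (s - s.mean()))   # mean of perturbation product
shortcut = np.mean(w * s) - w.mean() * s.mean()     # algebraically identical form
```

The shortcut form avoids materializing two perturbation arrays, which is why it is used when `perturbation` is False.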
@exporter.export<EOL>@preprocess_xarray<EOL>def friction_velocity(u, w, v=None, perturbation=False, axis=-<NUM_LIT:1>):
uw = kinematic_flux(u, w, perturbation=perturbation, axis=axis)<EOL>kf = uw * uw<EOL>if v is not None:<EOL><INDENT>vw = kinematic_flux(v, w, perturbation=perturbation, axis=axis)<EOL>kf += vw * vw<EOL><DEDENT>np.sqrt(kf, out=kf)<EOL>return np.sqrt(kf)<EOL>
r"""Compute the friction velocity from the time series of velocity components. Compute the friction velocity from the time series of the x, z, and optionally y, velocity components. Parameters ---------- u : array_like The wind component along the x-axis w : array_like The wind component along the z-axis v : array_like, optional The wind component along the y-axis. perturbation : {False, True}, optional True if the `u`, `w`, and `v` components of wind speed supplied to the function are perturbation velocities. If False, perturbation velocities will be calculated by removing the mean value from each component. Returns ------- array_like The corresponding friction velocity Other Parameters ---------------- axis : int The index of the time axis. Default is -1 See Also -------- kinematic_flux : Used to compute the x-component and y-component vertical kinematic momentum flux(es) used in the computation of the friction velocity. Notes ----- The friction velocity is computed as: .. math:: u_{*} = \sqrt[4]{\left(\overline{u^{\prime}w^{\prime}}\right)^2 + \left(\overline{v^{\prime}w^{\prime}}\right)^2}, where :math:`\overline{u^{\prime}w^{\prime}}` and :math:`\overline{v^{\prime}w^{\prime}}` are the x- and y-components of the vertical kinematic momentum flux, respectively. If the optional v component of velocity is not supplied to the function, the computation of the friction velocity is reduced to .. math:: u_{*} = \sqrt[4]{\left(\overline{u^{\prime}w^{\prime}}\right)^2} For more information on the subject, please see [Garratt1994]_.
f8489:m3
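The friction velocity is a fourth root, which the body above computes as two successive square roots. A scalar sketch with illustrative flux values:

```python
import numpy as np

uw, vw = -0.09, 0.12   # illustrative vertical kinematic momentum fluxes
kf = uw * uw + vw * vw
u_star = np.sqrt(np.sqrt(kf))   # fourth root taken as two square roots
```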
def _make_datetime(s):
s = bytearray(s) <EOL>year, month, day, hour, minute, second, cs = s<EOL>if year < <NUM_LIT>:<EOL><INDENT>year += <NUM_LIT:100><EOL><DEDENT>return datetime(<NUM_LIT> + year, month, day, hour, minute, second, <NUM_LIT> * cs)<EOL>
r"""Convert 7 bytes from a GINI file to a `datetime` instance.
f8494:m0
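A plausible decoding of the seven header bytes, with the two-digit-year pivot of 70 and the hundredths-to-microseconds scaling stated as assumptions of this sketch (the masked literals hide the exact values):

```python
from datetime import datetime

def make_datetime(raw):
    """Decode 7 GINI header bytes: 2-digit year, month, day, hour, minute,
    second, hundredths of a second. The year pivot of 70 is an assumption
    based on the usual two-digit-year convention."""
    year, month, day, hour, minute, second, cs = bytearray(raw)
    if year < 70:
        year += 100          # e.g. 17 -> 2017, while 99 -> 1999
    # 1 hundredth of a second = 10000 microseconds
    return datetime(1900 + year, month, day, hour, minute, second, 10000 * cs)
```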
def _scaled_int(s):
s = bytearray(s) <EOL>sign = <NUM_LIT:1> - ((s[<NUM_LIT:0>] & <NUM_LIT>) >> <NUM_LIT:6>)<EOL>int_val = (((s[<NUM_LIT:0>] & <NUM_LIT>) << <NUM_LIT:16>) | (s[<NUM_LIT:1>] << <NUM_LIT:8>) | s[<NUM_LIT:2>])<EOL>log.debug('<STR_LIT>', '<STR_LIT:U+0020>'.join(hex(c) for c in s), int_val, sign)<EOL>return (sign * int_val) / <NUM_LIT><EOL>
r"""Convert a 3 byte string to a signed integer value.
f8494:m1
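The three-byte value is sign-magnitude: the top bit of the first byte gives the sign, the remaining 23 bits the magnitude, scaled down on return. A sketch (the 1/10000 scale factor is an assumption based on GINI's lat/lon encoding; the masked literal hides the exact divisor):

```python
def scaled_int(raw):
    """Decode a 3-byte sign-magnitude integer scaled by 1/10000 (sketch)."""
    b = bytearray(raw)
    sign = 1 - ((b[0] & 0x80) >> 6)                  # 1 when clear, -1 when set
    magnitude = ((b[0] & 0x7f) << 16) | (b[1] << 8) | b[2]
    return sign * magnitude / 10000.
```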
def _name_lookup(names):
mapper = dict(zip(range(len(names)), names))<EOL>def lookup(val):<EOL><INDENT>return mapper.get(val, '<STR_LIT>')<EOL><DEDENT>return lookup<EOL>
r"""Create an io helper to convert an integer to a named value.
f8494:m2
def _add_projection_coords(ds, prod_desc, proj_var, dx, dy):
proj = cf_to_proj(proj_var)<EOL>x0, y0 = proj(prod_desc.lo1, prod_desc.la1)<EOL>ds.createDimension('<STR_LIT:x>', prod_desc.nx)<EOL>x_var = ds.createVariable('<STR_LIT:x>', np.float64, dimensions=('<STR_LIT:x>',))<EOL>x_var.units = '<STR_LIT:m>'<EOL>x_var.long_name = '<STR_LIT>'<EOL>x_var.standard_name = '<STR_LIT>'<EOL>x_var[:] = x0 + np.arange(prod_desc.nx) * (<NUM_LIT> * dx)<EOL>ds.createDimension('<STR_LIT:y>', prod_desc.ny)<EOL>y_var = ds.createVariable('<STR_LIT:y>', np.float64, dimensions=('<STR_LIT:y>',))<EOL>y_var.units = '<STR_LIT:m>'<EOL>y_var.long_name = '<STR_LIT>'<EOL>y_var.standard_name = '<STR_LIT>'<EOL>y_var[::-<NUM_LIT:1>] = y0 + np.arange(prod_desc.ny) * (<NUM_LIT> * dy)<EOL>x, y = np.meshgrid(x_var[:], y_var[:])<EOL>lon, lat = proj(x, y, inverse=True)<EOL>lon_var = ds.createVariable('<STR_LIT>', np.float64, dimensions=('<STR_LIT:y>', '<STR_LIT:x>'), wrap_array=lon)<EOL>lon_var.long_name = '<STR_LIT>'<EOL>lon_var.units = '<STR_LIT>'<EOL>lat_var = ds.createVariable('<STR_LIT>', np.float64, dimensions=('<STR_LIT:y>', '<STR_LIT:x>'), wrap_array=lat)<EOL>lat_var.long_name = '<STR_LIT>'<EOL>lat_var.units = '<STR_LIT>'<EOL>ds.img_extent = (x_var[:].min(), x_var[:].max(), y_var[:].min(), y_var[:].max())<EOL>
Add coordinate variables (projection and lon/lat) to a dataset.
f8494:m3
def __init__(self, filename):
fobj = open_as_needed(filename)<EOL>with contextlib.closing(fobj):<EOL><INDENT>self._buffer = IOBuffer.fromfile(fobj)<EOL><DEDENT>self.wmo_code = '<STR_LIT>'<EOL>self._process_wmo_header()<EOL>log.debug('<STR_LIT>', self.wmo_code)<EOL>log.debug('<STR_LIT>', len(self._buffer))<EOL>self._buffer = IOBuffer(self._buffer.read_func(zlib_decompress_all_frames))<EOL>log.debug('<STR_LIT>', len(self._buffer))<EOL>self._process_wmo_header()<EOL>log.debug('<STR_LIT>', self.wmo_code)<EOL>start = self._buffer.set_mark()<EOL>self.prod_desc = self._buffer.read_struct(self.prod_desc_fmt)<EOL>log.debug(self.prod_desc)<EOL>self.proj_info = None<EOL>if self.prod_desc.projection in (GiniProjection.lambert_conformal,<EOL>GiniProjection.polar_stereographic):<EOL><INDENT>self.proj_info = self._buffer.read_struct(self.lc_ps_fmt)<EOL><DEDENT>elif self.prod_desc.projection == GiniProjection.mercator:<EOL><INDENT>self.proj_info = self._buffer.read_struct(self.mercator_fmt)<EOL><DEDENT>else:<EOL><INDENT>log.warning('<STR_LIT>', self.prod_desc.projection)<EOL><DEDENT>log.debug(self.proj_info)<EOL>self.prod_desc2 = self._buffer.read_struct(self.prod_desc2_fmt)<EOL>log.debug(self.prod_desc2)<EOL>if self.prod_desc2.nav_cal not in (<NUM_LIT:0>, -<NUM_LIT>): <EOL><INDENT>if self._buffer.get_next(self.nav_fmt.size) != b'<STR_LIT:\x00>' * self.nav_fmt.size:<EOL><INDENT>log.warning('<STR_LIT>', self.prod_desc2.nav_cal)<EOL><DEDENT>if self.prod_desc2.nav_cal in (<NUM_LIT:1>, <NUM_LIT:2>):<EOL><INDENT>self.navigation = self._buffer.read_struct(self.nav_fmt)<EOL>log.debug(self.navigation)<EOL><DEDENT><DEDENT>if self.prod_desc2.pdb_size == <NUM_LIT:0>:<EOL><INDENT>log.warning('<STR_LIT>')<EOL>self.prod_desc2 = self.prod_desc2._replace(pdb_size=<NUM_LIT>)<EOL><DEDENT>self._buffer.jump_to(start, self.prod_desc2.pdb_size)<EOL>blob = self._buffer.read(self.prod_desc.num_records * self.prod_desc.record_len)<EOL>end = self._buffer.read(self.prod_desc.record_len)<EOL>if end != b'<STR_LIT>'.join(repeat(b'<STR_LIT>', self.prod_desc.record_len // <NUM_LIT:2>)):<EOL><INDENT>log.warning('<STR_LIT>', end)<EOL><DEDENT>if not self._buffer.at_end():<EOL><INDENT>if not blob:<EOL><INDENT>log.debug('<STR_LIT>')<EOL>from matplotlib.image import imread<EOL>blob = (imread(BytesIO(self._buffer.read())) * <NUM_LIT:255>).astype('<STR_LIT>')<EOL><DEDENT>else:<EOL><INDENT>log.warning('<STR_LIT>',<EOL>self._buffer.get_next(<NUM_LIT:10>))<EOL><DEDENT><DEDENT>self.data = np.array(blob).reshape((self.prod_desc.ny,<EOL>self.prod_desc.nx))<EOL>
r"""Create an instance of `GiniFile`. Parameters ---------- filename : str or file-like object If str, the name of the file to be opened. Gzip-ed files are recognized with the extension ``'.gz'``, as are bzip2-ed files with the extension ``'.bz2'`` If `filename` is a file-like object, this will be read from directly.
f8494:c1:m0
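The constructor above re-wraps its buffer through `zlib_decompress_all_frames`, since a GINI product arriving over NOAAPORT can consist of several zlib streams concatenated back to back. A minimal standalone sketch of that idea, using only the standard library — the helper name and the exact pass-through behavior for trailing uncompressed bytes are assumptions here, not MetPy's actual implementation:

```python
import zlib

def decompress_all_frames(data):
    """Decompress a buffer that may hold several concatenated zlib streams.

    Keep starting a fresh decompressor on the bytes left over from the
    previous stream until nothing remains; bytes that are not a valid
    zlib stream are passed through untouched.
    """
    out = bytearray()
    while data:
        decomp = zlib.decompressobj()
        try:
            out += decomp.decompress(data)
        except zlib.error:
            # Remaining bytes are not zlib-compressed; pass them through
            out += data
            break
        data = decomp.unused_data
    return bytes(out)

# Two independently compressed frames back to back
frames = zlib.compress(b'header') + zlib.compress(b'payload')
print(decompress_all_frames(frames))  # b'headerpayload'
```

Each `zlib.decompressobj` stops at the end of its own stream and exposes the rest of the input via `unused_data`, which is what makes the loop work.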
@deprecated(<NUM_LIT>, alternative='<STR_LIT>')
def to_dataset(self):
ds = Dataset()

# Put in the time as an offset (in ms) from midnight of the product's date
ds.createDimension('<STR_LIT:time>', <NUM_LIT:1>)
time_var = ds.createVariable('<STR_LIT:time>', np.int32, dimensions=('<STR_LIT:time>',))
base_time = self.prod_desc.datetime.replace(hour=<NUM_LIT:0>, minute=<NUM_LIT:0>,
                                            second=<NUM_LIT:0>, microsecond=<NUM_LIT:0>)
time_var.units = '<STR_LIT>' + base_time.isoformat()
offset = (self.prod_desc.datetime - base_time)
time_var[:] = offset.seconds * <NUM_LIT:1000> + offset.microseconds / <NUM_LIT>

# Set up CF grid mapping and projection coordinates based on the product's
# projection type
if self.prod_desc.projection == GiniProjection.lambert_conformal:
    proj_var = ds.createVariable('<STR_LIT>', np.int32)
    proj_var.grid_mapping_name = '<STR_LIT>'
    proj_var.standard_parallel = self.prod_desc2.lat_in
    proj_var.longitude_of_central_meridian = self.proj_info.lov
    proj_var.latitude_of_projection_origin = self.prod_desc2.lat_in
    proj_var.earth_radius = <NUM_LIT>
    _add_projection_coords(ds, self.prod_desc, proj_var, self.proj_info.dx,
                           self.proj_info.dy)
elif self.prod_desc.projection == GiniProjection.polar_stereographic:
    proj_var = ds.createVariable('<STR_LIT>', np.int32)
    proj_var.grid_mapping_name = '<STR_LIT>'
    proj_var.straight_vertical_longitude_from_pole = self.proj_info.lov
    proj_var.latitude_of_projection_origin = -<NUM_LIT> if self.proj_info.proj_center else <NUM_LIT>
    proj_var.earth_radius = <NUM_LIT>
    proj_var.standard_parallel = <NUM_LIT>
    _add_projection_coords(ds, self.prod_desc, proj_var, self.proj_info.dx,
                           self.proj_info.dy)
elif self.prod_desc.projection == GiniProjection.mercator:
    proj_var = ds.createVariable('<STR_LIT>', np.int32)
    proj_var.grid_mapping_name = '<STR_LIT>'
    proj_var.longitude_of_projection_origin = self.prod_desc.lo1
    proj_var.latitude_of_projection_origin = self.prod_desc.la1
    proj_var.standard_parallel = self.prod_desc2.lat_in
    proj_var.earth_radius = <NUM_LIT>
    _add_projection_coords(ds, self.prod_desc, proj_var,
                           self.prod_desc2.resolution,
                           self.prod_desc2.resolution)
else:
    raise NotImplementedError('<STR_LIT>')

# Add the image data, masking out pixels equal to the missing value, and
# stripping any parenthesized suffix from the channel name
name = self.prod_desc.channel
if '<STR_LIT:(>' in name:
    name = name.split('<STR_LIT:(>')[<NUM_LIT:0>].rstrip()
data_var = ds.createVariable(name, self.data.dtype, ('<STR_LIT:y>', '<STR_LIT:x>'),
                             wrap_array=np.ma.array(self.data,
                                                    mask=self.data == self.missing))
data_var.long_name = self.prod_desc.channel
data_var.missing_value = self.missing
data_var.coordinates = '<STR_LIT>'
data_var.grid_mapping = proj_var.name

# Add global file-level attributes
ds.satellite = self.prod_desc.creating_entity
ds.sector = self.prod_desc.sector_id
return ds
Convert to a CDM dataset.

Gives a representation of the data in a much more user-friendly manner,
providing easy access to Variables and relevant attributes.

Returns
-------
Dataset

.. deprecated:: 0.8.0
f8494:c1:m1
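The time coordinate in `to_dataset` above is stored as an offset from midnight of the product's date. Assuming the masked divisor is 1000 (i.e. microseconds converted to milliseconds, consistent with the `* 1000` on the seconds term), the arithmetic can be sketched as:

```python
from datetime import datetime

def time_offset_ms(dt):
    """Encode a timestamp as milliseconds since midnight of the same day,
    mirroring the time-coordinate arithmetic used in to_dataset."""
    base = dt.replace(hour=0, minute=0, second=0, microsecond=0)
    offset = dt - base
    return offset.seconds * 1000 + offset.microseconds // 1000

print(time_offset_ms(datetime(2017, 1, 1, 12, 30, 15)))  # 45015000
```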
def _process_wmo_header(self):
data = self._buffer.get_next(<NUM_LIT:64>).decode('<STR_LIT:utf-8>', '<STR_LIT:ignore>')
match = self.wmo_finder.search(data)
if match:
    self.wmo_code = match.groups()[<NUM_LIT:0>]
    self.siteID = match.groups()[-<NUM_LIT:1>]
    self._buffer.skip(match.end())
Read off the WMO header from the file, if necessary.
f8494:c1:m2
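`_process_wmo_header` peeks at the first bytes of the file and uses a compiled regex (`wmo_finder`, whose pattern is masked above) to pull out the WMO product code and site ID, then skips the buffer past the match. A runnable sketch of the same flow with a simplified, hypothetical pattern — MetPy's actual regex and group layout differ:

```python
import re

# Hypothetical pattern for a WMO abbreviated heading like 'TIGE05 KNES 280000':
# a 4-letter + 2-digit product code, a 4-letter site, and a 6-digit day/time.
wmo_finder = re.compile(r'([A-Z]{4}\d{2}) ([A-Z]{4}) (\d{6})')

def process_wmo_header(data):
    """Return the WMO code, site ID, and how far to skip, or None if absent."""
    match = wmo_finder.search(data)
    if match:
        return {'wmo_code': match.group(1),
                'site_id': match.group(2),
                'skip_to': match.end()}
    return None

print(process_wmo_header('TIGE05 KNES 280000\r\r\n'))
```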
def __str__(self):
parts = [self.__class__.__name__ + '<STR_LIT>',
         '<STR_LIT>', '<STR_LIT>',
         '<STR_LIT>',
         '<STR_LIT>',
         '<STR_LIT>']
return '<STR_LIT>'.join(parts).format(self.prod_desc, self.prod_desc2)
Return a string representation of the product.
f8494:c1:m3
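When `to_dataset` wraps the image, pixels equal to `self.missing` are masked out with `numpy.ma` before the variable is created. A small sketch of that masking on toy data — the sentinel value 255 here is only an illustrative choice, not the product's actual missing value:

```python
import numpy as np

# Toy 2x3 image where 255 marks missing pixels
data = np.array([[0, 128, 255],
                 [255, 64, 0]], dtype=np.uint8)
missing = 255

# Boolean mask is True wherever the data equals the sentinel
masked = np.ma.array(data, mask=data == missing)
print(masked.count())  # 4 valid (unmasked) pixels
```

Downstream consumers can then ignore the masked pixels without any special-casing, since reductions like `mean` or `count` skip them automatically.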