Create an array.
def array(data: Sequence[object], dtype: Optional[Union[str, np.dtype, ExtensionDtype]] = None, copy: bool = True, ) -> ABCExtensionArray: """ Create an array. .. versionadded:: 0.24.0 Parameters ---------- data : Sequence of objects The scalars inside `data` should be instances of the scalar type for `dtype`. It's expected that `data` represents a 1-dimensional array of data. When `data` is an Index or Series, the underlying array will be extracted from `data`. dtype : str, np.dtype, or ExtensionDtype, optional The dtype to use for the array. This may be a NumPy dtype or an extension type registered with pandas using :meth:`pandas.api.extensions.register_extension_dtype`. If not specified, there are two possibilities: 1. When `data` is a :class:`Series`, :class:`Index`, or :class:`ExtensionArray`, the `dtype` will be taken from the data. 2. Otherwise, pandas will attempt to infer the `dtype` from the data. Note that when `data` is a NumPy array, ``data.dtype`` is *not* used for inferring the array type. This is because NumPy cannot represent all the types of data that can be held in extension arrays. Currently, pandas will infer an extension dtype for sequences of ============================== ===================================== Scalar Type Array Type ============================== ===================================== :class:`pandas.Interval` :class:`pandas.arrays.IntervalArray` :class:`pandas.Period` :class:`pandas.arrays.PeriodArray` :class:`datetime.datetime` :class:`pandas.arrays.DatetimeArray` :class:`datetime.timedelta` :class:`pandas.arrays.TimedeltaArray` ============================== ===================================== For all other cases, NumPy's usual inference rules will be used. copy : bool, default True Whether to copy the data, even if not necessary. Depending on the type of `data`, creating the new array may require copying data, even if ``copy=False``. Returns ------- ExtensionArray The newly created array. Raises ------ ValueError When `data` is not 1-dimensional. See Also -------- numpy.array : Construct a NumPy array. Series : Construct a pandas Series. Index : Construct a pandas Index. arrays.PandasArray : ExtensionArray wrapping a NumPy array. Series.array : Extract the array stored within a Series. Notes ----- Omitting the `dtype` argument means pandas will attempt to infer the best array type from the values in the data. As new array types are added by pandas and 3rd party libraries, the "best" array type may change. We recommend specifying `dtype` to ensure that 1. the correct array type for the data is returned 2. the returned array type doesn't change as new extension types are added by pandas and third-party libraries Additionally, if the underlying memory representation of the returned array matters, we recommend specifying the `dtype` as a concrete object rather than a string alias or allowing it to be inferred. For example, a future version of pandas or a 3rd-party library may include a dedicated ExtensionArray for string data. In this event, the following would no longer return a :class:`arrays.PandasArray` backed by a NumPy array. >>> pd.array(['a', 'b'], dtype=str) <PandasArray> ['a', 'b'] Length: 2, dtype: str32 This would instead return the new ExtensionArray dedicated for string data. If you really need the new array to be backed by a NumPy array, specify that in the dtype. 
>>> pd.array(['a', 'b'], dtype=np.dtype("<U1")) <PandasArray> ['a', 'b'] Length: 2, dtype: str32 Or use the dedicated constructor for the array you're expecting, and wrap that in a PandasArray >>> pd.array(np.array(['a', 'b'], dtype='<U1')) <PandasArray> ['a', 'b'] Length: 2, dtype: str32 Finally, Pandas has arrays that mostly overlap with NumPy * :class:`arrays.DatetimeArray` * :class:`arrays.TimedeltaArray` When data with a ``datetime64[ns]`` or ``timedelta64[ns]`` dtype is passed, pandas will always return a ``DatetimeArray`` or ``TimedeltaArray`` rather than a ``PandasArray``. This is for symmetry with the case of timezone-aware data, which NumPy does not natively support. >>> pd.array(['2015', '2016'], dtype='datetime64[ns]') <DatetimeArray> ['2015-01-01 00:00:00', '2016-01-01 00:00:00'] Length: 2, dtype: datetime64[ns] >>> pd.array(["1H", "2H"], dtype='timedelta64[ns]') <TimedeltaArray> ['01:00:00', '02:00:00'] Length: 2, dtype: timedelta64[ns] Examples -------- If a dtype is not specified, `data` is passed through to :meth:`numpy.array`, and a :class:`arrays.PandasArray` is returned. >>> pd.array([1, 2]) <PandasArray> [1, 2] Length: 2, dtype: int64 Or the NumPy dtype can be specified >>> pd.array([1, 2], dtype=np.dtype("int32")) <PandasArray> [1, 2] Length: 2, dtype: int32 You can use the string alias for `dtype` >>> pd.array(['a', 'b', 'a'], dtype='category') [a, b, a] Categories (2, object): [a, b] Or specify the actual dtype >>> pd.array(['a', 'b', 'a'], ... dtype=pd.CategoricalDtype(['a', 'b', 'c'], ordered=True)) [a, b, a] Categories (3, object): [a < b < c] Because omitting the `dtype` passes the data through to NumPy, a mixture of valid integers and NA will return a floating-point NumPy array. >>> pd.array([1, 2, np.nan]) <PandasArray> [1.0, 2.0, nan] Length: 3, dtype: float64 To use pandas' nullable :class:`pandas.arrays.IntegerArray`, specify the dtype: >>> pd.array([1, 2, np.nan], dtype='Int64') <IntegerArray> [1, 2, NaN] Length: 3, dtype: Int64 Pandas will infer an ExtensionArray for some types of data: >>> pd.array([pd.Period('2000', freq="D"), pd.Period("2000", freq="D")]) <PeriodArray> ['2000-01-01', '2000-01-01'] Length: 2, dtype: period[D] `data` must be 1-dimensional. A ValueError is raised when the input has the wrong dimensionality. >>> pd.array(1) Traceback (most recent call last): ... ValueError: Cannot pass scalar '1' to 'pandas.array'. """ from pandas.core.arrays import ( period_array, ExtensionArray, IntervalArray, PandasArray, DatetimeArray, TimedeltaArray, ) from pandas.core.internals.arrays import extract_array if lib.is_scalar(data): msg = ( "Cannot pass scalar '{}' to 'pandas.array'." ) raise ValueError(msg.format(data)) data = extract_array(data, extract_numpy=True) if dtype is None and isinstance(data, ExtensionArray): dtype = data.dtype # this returns None for not-found dtypes. if isinstance(dtype, str): dtype = registry.find(dtype) or dtype if is_extension_array_dtype(dtype): cls = dtype.construct_array_type() return cls._from_sequence(data, dtype=dtype, copy=copy) if dtype is None: inferred_dtype = lib.infer_dtype(data, skipna=False) if inferred_dtype == 'period': try: return period_array(data, copy=copy) except tslibs.IncompatibleFrequency: # We may have a mixture of frequencies. # We choose to return an ndarray, rather than raising. pass elif inferred_dtype == 'interval': try: return IntervalArray(data, copy=copy) except ValueError: # We may have a mixture of `closed` here. # We choose to return an ndarray, rather than raising. 
pass elif inferred_dtype.startswith('datetime'): # datetime, datetime64 try: return DatetimeArray._from_sequence(data, copy=copy) except ValueError: # Mixture of timezones, fall back to PandasArray pass elif inferred_dtype.startswith('timedelta'): # timedelta, timedelta64 return TimedeltaArray._from_sequence(data, copy=copy) # TODO(BooleanArray): handle this type # Pandas overrides NumPy for # 1. datetime64[ns] # 2. timedelta64[ns] # so that a DatetimeArray is returned. if is_datetime64_ns_dtype(dtype): return DatetimeArray._from_sequence(data, dtype=dtype, copy=copy) elif is_timedelta64_ns_dtype(dtype): return TimedeltaArray._from_sequence(data, dtype=dtype, copy=copy) result = PandasArray._from_sequence(data, dtype=dtype, copy=copy) return result
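A minimal usage sketch of the behavior implemented above (assuming pandas 0.24, where pd.array was introduced): the dtype is taken from Series input via extract_array, and scalars are rejected.

import pandas as pd

# dtype is taken from the data when `data` is a Series/Index/ExtensionArray
s = pd.Series([1, 2, 3], dtype='Int64')
arr = pd.array(s)
print(type(arr).__name__, arr.dtype)   # IntegerArray Int64

# scalars are rejected, matching the lib.is_scalar check above
try:
    pd.array(1)
except ValueError as exc:
    print(exc)   # Cannot pass scalar '1' to 'pandas.array'.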
Try to do platform conversion, with special casing for IntervalArray. Wrapper around maybe_convert_platform that alters the default return dtype in certain cases to be compatible with IntervalArray. For example, empty lists return with integer dtype instead of object dtype, which is prohibited for IntervalArray.
def maybe_convert_platform_interval(values): """ Try to do platform conversion, with special casing for IntervalArray. Wrapper around maybe_convert_platform that alters the default return dtype in certain cases to be compatible with IntervalArray. For example, empty lists return with integer dtype instead of object dtype, which is prohibited for IntervalArray. Parameters ---------- values : array-like Returns ------- array """ if isinstance(values, (list, tuple)) and len(values) == 0: # GH 19016 # empty lists/tuples get object dtype by default, but this is # prohibited for IntervalArray, so coerce to integer instead return np.array([], dtype=np.int64) elif is_categorical_dtype(values): values = np.asarray(values) return maybe_convert_platform(values)
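The helper is internal; here is a rough sketch of the empty-input special case it implements, using only NumPy (the coercion mirrors the branch above):

import numpy as np

values = []
# an empty list would otherwise end up with a default dtype that
# IntervalArray rejects, so it is coerced to int64 instead
if isinstance(values, (list, tuple)) and len(values) == 0:
    values = np.array([], dtype=np.int64)
print(values.dtype)   # int64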
Check if the object is a file-like object.
def is_file_like(obj): """ Check if the object is a file-like object. For objects to be considered file-like, they must be an iterator AND have either a `read` and/or `write` method as an attribute. Note: file-like objects must be iterable, but iterable objects need not be file-like. .. versionadded:: 0.20.0 Parameters ---------- obj : The object to check Returns ------- is_file_like : bool Whether `obj` has file-like properties. Examples -------- >>> buffer = StringIO("data") >>> is_file_like(buffer) True >>> is_file_like([1, 2, 3]) False """ if not (hasattr(obj, 'read') or hasattr(obj, 'write')): return False if not hasattr(obj, "__iter__"): return False return True
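A runnable version of the docstring example via the public pandas.api.types export (available since pandas 0.20):

from io import StringIO
from pandas.api.types import is_file_like

buffer = StringIO("data")
print(is_file_like(buffer))       # True: iterable and has read/write
print(is_file_like([1, 2, 3]))    # False: iterable but no read/write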
Check if the object is list-like.
def is_list_like(obj, allow_sets=True): """ Check if the object is list-like. Objects that are considered list-like are for example Python lists, tuples, sets, NumPy arrays, and Pandas Series. Strings and datetime objects, however, are not considered list-like. Parameters ---------- obj : The object to check allow_sets : boolean, default True If this parameter is False, sets will not be considered list-like .. versionadded:: 0.24.0 Returns ------- is_list_like : bool Whether `obj` has list-like properties. Examples -------- >>> is_list_like([1, 2, 3]) True >>> is_list_like({1, 2, 3}) True >>> is_list_like(datetime(2017, 1, 1)) False >>> is_list_like("foo") False >>> is_list_like(1) False >>> is_list_like(np.array([2])) True >>> is_list_like(np.array(2)) False """ return (isinstance(obj, abc.Iterable) and # we do not count strings/unicode/bytes as list-like not isinstance(obj, (str, bytes)) and # exclude zero-dimensional numpy arrays, effectively scalars not (isinstance(obj, np.ndarray) and obj.ndim == 0) and # exclude sets if allow_sets is False not (allow_sets is False and isinstance(obj, abc.Set)))
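A short sketch of the allow_sets switch and the zero-dimensional ndarray exclusion, via the public export (allow_sets assumes pandas 0.24+):

import numpy as np
from pandas.api.types import is_list_like

print(is_list_like({1, 2, 3}))                    # True by default
print(is_list_like({1, 2, 3}, allow_sets=False))  # False: sets excluded
print(is_list_like(np.array(2)))                  # False: 0-d array, scalar-like
print(is_list_like(np.array([2])))                # True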
Check if the object is list-like, and that all of its elements are also list-like.
def is_nested_list_like(obj): """ Check if the object is list-like, and that all of its elements are also list-like. .. versionadded:: 0.20.0 Parameters ---------- obj : The object to check Returns ------- is_list_like : bool Whether `obj` has list-like properties. Examples -------- >>> is_nested_list_like([[1, 2, 3]]) True >>> is_nested_list_like([{1, 2, 3}, {1, 2, 3}]) True >>> is_nested_list_like(["foo"]) False >>> is_nested_list_like([]) False >>> is_nested_list_like([[1, 2, 3], 1]) False Notes ----- This won't reliably detect whether a consumable iterator (e.g. a generator) is a nested-list-like without consuming the iterator. To avoid consuming it, we always return False if the outer container doesn't define `__len__`. See Also -------- is_list_like """ return (is_list_like(obj) and hasattr(obj, '__len__') and len(obj) > 0 and all(is_list_like(item) for item in obj))
Check if the object is dict-like.
def is_dict_like(obj): """ Check if the object is dict-like. Parameters ---------- obj : The object to check Returns ------- is_dict_like : bool Whether `obj` has dict-like properties. Examples -------- >>> is_dict_like({1: 2}) True >>> is_dict_like([1, 2, 3]) False >>> is_dict_like(dict) False >>> is_dict_like(dict()) True """ dict_like_attrs = ("__getitem__", "keys", "__contains__") return (all(hasattr(obj, attr) for attr in dict_like_attrs) # [GH 25196] exclude classes and not isinstance(obj, type))
Check if the object is a sequence of objects. String types are not included as sequences here.
def is_sequence(obj): """ Check if the object is a sequence of objects. String types are not included as sequences here. Parameters ---------- obj : The object to check Returns ------- is_sequence : bool Whether `obj` is a sequence of objects. Examples -------- >>> l = [1, 2, 3] >>> >>> is_sequence(l) True >>> is_sequence(iter(l)) False """ try: iter(obj) # Can iterate over it. len(obj) # Has a length associated with it. return not isinstance(obj, (str, bytes)) except (TypeError, AttributeError): return False
This is called upon unpickling, rather than the default which doesn't have arguments and breaks __new__.
def _new_DatetimeIndex(cls, d): """ This is called upon unpickling, rather than the default which doesn't have arguments and breaks __new__ """ if "data" in d and not isinstance(d["data"], DatetimeIndex): # Avoid need to verify integrity by calling simple_new directly data = d.pop("data") result = cls._simple_new(data, **d) else: with warnings.catch_warnings(): # we ignore warnings from passing verify_integrity=False # TODO: If we knew what was going in to **d, we might be able to # go through _simple_new instead warnings.simplefilter("ignore") result = cls.__new__(cls, verify_integrity=False, **d) return result
Return a fixed frequency DatetimeIndex.
def date_range(start=None, end=None, periods=None, freq=None, tz=None, normalize=False, name=None, closed=None, **kwargs): """ Return a fixed frequency DatetimeIndex. Parameters ---------- start : str or datetime-like, optional Left bound for generating dates. end : str or datetime-like, optional Right bound for generating dates. periods : integer, optional Number of periods to generate. freq : str or DateOffset, default 'D' Frequency strings can have multiples, e.g. '5H'. See :ref:`here <timeseries.offset_aliases>` for a list of frequency aliases. tz : str or tzinfo, optional Time zone name for returning localized DatetimeIndex, for example 'Asia/Hong_Kong'. By default, the resulting DatetimeIndex is timezone-naive. normalize : bool, default False Normalize start/end dates to midnight before generating date range. name : str, default None Name of the resulting DatetimeIndex. closed : {None, 'left', 'right'}, optional Make the interval closed with respect to the given frequency to the 'left', 'right', or both sides (None, the default). **kwargs For compatibility. Has no effect on the result. Returns ------- rng : DatetimeIndex See Also -------- DatetimeIndex : An immutable container for datetimes. timedelta_range : Return a fixed frequency TimedeltaIndex. period_range : Return a fixed frequency PeriodIndex. interval_range : Return a fixed frequency IntervalIndex. Notes ----- Of the four parameters ``start``, ``end``, ``periods``, and ``freq``, exactly three must be specified. If ``freq`` is omitted, the resulting ``DatetimeIndex`` will have ``periods`` linearly spaced elements between ``start`` and ``end`` (closed on both sides). To learn more about the frequency strings, please see `this link <http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases>`__. Examples -------- **Specifying the values** The next four examples generate the same `DatetimeIndex`, but vary the combination of `start`, `end` and `periods`. Specify `start` and `end`, with the default daily frequency. >>> pd.date_range(start='1/1/2018', end='1/08/2018') DatetimeIndex(['2018-01-01', '2018-01-02', '2018-01-03', '2018-01-04', '2018-01-05', '2018-01-06', '2018-01-07', '2018-01-08'], dtype='datetime64[ns]', freq='D') Specify `start` and `periods`, the number of periods (days). >>> pd.date_range(start='1/1/2018', periods=8) DatetimeIndex(['2018-01-01', '2018-01-02', '2018-01-03', '2018-01-04', '2018-01-05', '2018-01-06', '2018-01-07', '2018-01-08'], dtype='datetime64[ns]', freq='D') Specify `end` and `periods`, the number of periods (days). >>> pd.date_range(end='1/1/2018', periods=8) DatetimeIndex(['2017-12-25', '2017-12-26', '2017-12-27', '2017-12-28', '2017-12-29', '2017-12-30', '2017-12-31', '2018-01-01'], dtype='datetime64[ns]', freq='D') Specify `start`, `end`, and `periods`; the frequency is generated automatically (linearly spaced). >>> pd.date_range(start='2018-04-24', end='2018-04-27', periods=3) DatetimeIndex(['2018-04-24 00:00:00', '2018-04-25 12:00:00', '2018-04-27 00:00:00'], dtype='datetime64[ns]', freq=None) **Other Parameters** Changed the `freq` (frequency) to ``'M'`` (month end frequency). 
>>> pd.date_range(start='1/1/2018', periods=5, freq='M') DatetimeIndex(['2018-01-31', '2018-02-28', '2018-03-31', '2018-04-30', '2018-05-31'], dtype='datetime64[ns]', freq='M') Multiples are allowed >>> pd.date_range(start='1/1/2018', periods=5, freq='3M') DatetimeIndex(['2018-01-31', '2018-04-30', '2018-07-31', '2018-10-31', '2019-01-31'], dtype='datetime64[ns]', freq='3M') `freq` can also be specified as an Offset object. >>> pd.date_range(start='1/1/2018', periods=5, freq=pd.offsets.MonthEnd(3)) DatetimeIndex(['2018-01-31', '2018-04-30', '2018-07-31', '2018-10-31', '2019-01-31'], dtype='datetime64[ns]', freq='3M') Specify `tz` to set the timezone. >>> pd.date_range(start='1/1/2018', periods=5, tz='Asia/Tokyo') DatetimeIndex(['2018-01-01 00:00:00+09:00', '2018-01-02 00:00:00+09:00', '2018-01-03 00:00:00+09:00', '2018-01-04 00:00:00+09:00', '2018-01-05 00:00:00+09:00'], dtype='datetime64[ns, Asia/Tokyo]', freq='D') `closed` controls whether to include `start` and `end` that are on the boundary. The default includes boundary points on either end. >>> pd.date_range(start='2017-01-01', end='2017-01-04', closed=None) DatetimeIndex(['2017-01-01', '2017-01-02', '2017-01-03', '2017-01-04'], dtype='datetime64[ns]', freq='D') Use ``closed='left'`` to exclude `end` if it falls on the boundary. >>> pd.date_range(start='2017-01-01', end='2017-01-04', closed='left') DatetimeIndex(['2017-01-01', '2017-01-02', '2017-01-03'], dtype='datetime64[ns]', freq='D') Use ``closed='right'`` to exclude `start` if it falls on the boundary. >>> pd.date_range(start='2017-01-01', end='2017-01-04', closed='right') DatetimeIndex(['2017-01-02', '2017-01-03', '2017-01-04'], dtype='datetime64[ns]', freq='D') """ if freq is None and com._any_none(periods, start, end): freq = 'D' dtarr = DatetimeArray._generate_range( start=start, end=end, periods=periods, freq=freq, tz=tz, normalize=normalize, closed=closed, **kwargs) return DatetimeIndex._simple_new( dtarr, tz=dtarr.tz, freq=dtarr.freq, name=name)
Return a fixed frequency DatetimeIndex, with business day as the default frequency.
def bdate_range(start=None, end=None, periods=None, freq='B', tz=None, normalize=True, name=None, weekmask=None, holidays=None, closed=None, **kwargs): """ Return a fixed frequency DatetimeIndex, with business day as the default frequency Parameters ---------- start : string or datetime-like, default None Left bound for generating dates. end : string or datetime-like, default None Right bound for generating dates. periods : integer, default None Number of periods to generate. freq : string or DateOffset, default 'B' (business daily) Frequency strings can have multiples, e.g. '5H'. tz : string or None Time zone name for returning localized DatetimeIndex, for example 'Asia/Hong_Kong'. normalize : bool, default True Normalize start/end dates to midnight before generating date range. name : string, default None Name of the resulting DatetimeIndex. weekmask : string or None, default None Weekmask of valid business days, passed to ``numpy.busdaycalendar``, only used when custom frequency strings are passed. The default value None is equivalent to 'Mon Tue Wed Thu Fri'. .. versionadded:: 0.21.0 holidays : list-like or None, default None Dates to exclude from the set of valid business days, passed to ``numpy.busdaycalendar``, only used when custom frequency strings are passed. .. versionadded:: 0.21.0 closed : string, default None Make the interval closed with respect to the given frequency to the 'left', 'right', or both sides (None). **kwargs For compatibility. Has no effect on the result. Returns ------- DatetimeIndex Notes ----- Of the four parameters: ``start``, ``end``, ``periods``, and ``freq``, exactly three must be specified. Specifying ``freq`` is a requirement for ``bdate_range``. Use ``date_range`` if specifying ``freq`` is not desired. To learn more about the frequency strings, please see `this link <http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases>`__. Examples -------- Note how the two weekend days are skipped in the result. >>> pd.bdate_range(start='1/1/2018', end='1/08/2018') DatetimeIndex(['2018-01-01', '2018-01-02', '2018-01-03', '2018-01-04', '2018-01-05', '2018-01-08'], dtype='datetime64[ns]', freq='B') """ if freq is None: msg = 'freq must be specified for bdate_range; use date_range instead' raise TypeError(msg) if is_string_like(freq) and freq.startswith('C'): try: weekmask = weekmask or 'Mon Tue Wed Thu Fri' freq = prefix_mapping[freq](holidays=holidays, weekmask=weekmask) except (KeyError, TypeError): msg = 'invalid custom frequency string: {freq}'.format(freq=freq) raise ValueError(msg) elif holidays or weekmask: msg = ('a custom frequency string is required when holidays or ' 'weekmask are passed, got frequency {freq}').format(freq=freq) raise ValueError(msg) return date_range(start=start, end=end, periods=periods, freq=freq, tz=tz, normalize=normalize, name=name, closed=closed, **kwargs)
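A sketch of the custom-frequency branch above: a freq beginning with 'C' routes holidays/weekmask into a CustomBusinessDay offset (the dates here are illustrative):

import pandas as pd

idx = pd.bdate_range(start='2018-01-01', end='2018-01-08', freq='C',
                     weekmask='Mon Tue Wed Thu Fri',
                     holidays=['2018-01-02'])
print(idx)   # business days with 2018-01-02 dropped as a holiday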
Return a fixed frequency DatetimeIndex, with CustomBusinessDay as the default frequency.
def cdate_range(start=None, end=None, periods=None, freq='C', tz=None, normalize=True, name=None, closed=None, **kwargs): """ Return a fixed frequency DatetimeIndex, with CustomBusinessDay as the default frequency .. deprecated:: 0.21.0 Parameters ---------- start : string or datetime-like, default None Left bound for generating dates end : string or datetime-like, default None Right bound for generating dates periods : integer, default None Number of periods to generate freq : string or DateOffset, default 'C' (CustomBusinessDay) Frequency strings can have multiples, e.g. '5H' tz : string, default None Time zone name for returning localized DatetimeIndex, for example 'Asia/Hong_Kong' normalize : bool, default True Normalize start/end dates to midnight before generating date range name : string, default None Name of the resulting DatetimeIndex weekmask : string, default 'Mon Tue Wed Thu Fri' weekmask of valid business days, passed to ``numpy.busdaycalendar`` holidays : list list/array of dates to exclude from the set of valid business days, passed to ``numpy.busdaycalendar`` closed : string, default None Make the interval closed with respect to the given frequency to the 'left', 'right', or both sides (None) Notes ----- Of the three parameters: ``start``, ``end``, and ``periods``, exactly two must be specified. To learn more about the frequency strings, please see `this link <http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases>`__. Returns ------- rng : DatetimeIndex """ warnings.warn("cdate_range is deprecated and will be removed in a future " "version, instead use pd.bdate_range(..., freq='{freq}')" .format(freq=freq), FutureWarning, stacklevel=2) if freq == 'C': holidays = kwargs.pop('holidays', []) weekmask = kwargs.pop('weekmask', 'Mon Tue Wed Thu Fri') freq = CDay(holidays=holidays, weekmask=weekmask) return date_range(start=start, end=end, periods=periods, freq=freq, tz=tz, normalize=normalize, name=name, closed=closed, **kwargs)
Split data into blocks & return conformed data.
def _create_blocks(self): """ Split data into blocks & return conformed data. """ obj, index = self._convert_freq() if index is not None: index = self._on # filter out the on from the object if self.on is not None: if obj.ndim == 2: obj = obj.reindex(columns=obj.columns.difference([self.on]), copy=False) blocks = obj._to_dict_of_blocks(copy=False).values() return blocks, obj, index
Sub-classes to define. Return a sliced object.
def _gotitem(self, key, ndim, subset=None): """ Sub-classes to define. Return a sliced object. Parameters ---------- key : str / list of selections ndim : 1,2 requested ndim of result subset : object, default None subset to act on """ # create a new object to prevent aliasing if subset is None: subset = self.obj self = self._shallow_copy(subset) self._reset_cache() if subset.ndim == 2: if is_scalar(key) and key in subset or is_list_like(key): self._selection = key return self
Return index as ndarrays.
def _get_index(self, index=None): """ Return index as ndarrays. Returns ------- tuple of (index, index_as_ndarray) """ if self.is_freq_type: if index is None: index = self._on return index, index.asi8 return index, index
Wrap a single result.
def _wrap_result(self, result, block=None, obj=None): """ Wrap a single result. """ if obj is None: obj = self._selected_obj index = obj.index if isinstance(result, np.ndarray): # coerce if necessary if block is not None: if is_timedelta64_dtype(block.values.dtype): from pandas import to_timedelta result = to_timedelta( result.ravel(), unit='ns').values.reshape(result.shape) if result.ndim == 1: from pandas import Series return Series(result, index, name=obj.name) return type(obj)(result, index=index, columns=block.columns) return result
Wrap the results.
def _wrap_results(self, results, blocks, obj): """ Wrap the results. Parameters ---------- results : list of ndarrays blocks : list of blocks obj : conformed data (may be resampled) """ from pandas import Series, concat from pandas.core.index import ensure_index final = [] for result, block in zip(results, blocks): result = self._wrap_result(result, block=block, obj=obj) if result.ndim == 1: return result final.append(result) # if we have an 'on' column # we want to put it back into the results # in the same location columns = self._selected_obj.columns if self.on is not None and not self._on.equals(obj.index): name = self._on.name final.append(Series(self._on, index=obj.index, name=name)) if self._selection is not None: selection = ensure_index(self._selection) # need to reorder to include original location of # the on column (if its not already there) if name not in selection: columns = self.obj.columns indexer = columns.get_indexer(selection.tolist() + [name]) columns = columns.take(sorted(indexer)) if not len(final): return obj.astype('float64') return concat(final, axis=1).reindex(columns=columns, copy=False)
Center the result in the window.
def _center_window(self, result, window): """ Center the result in the window. """ if self.axis > result.ndim - 1: raise ValueError("Requested axis is larger than no. of argument " "dimensions") offset = _offset(window, True) if offset > 0: if isinstance(result, (ABCSeries, ABCDataFrame)): result = result.slice_shift(-offset, axis=self.axis) else: lead_indexer = [slice(None)] * result.ndim lead_indexer[self.axis] = slice(offset, None) result = np.copy(result[tuple(lead_indexer)]) return result
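The effect of this helper is visible through the public rolling API; a small sketch comparing trailing and centered windows:

import pandas as pd

s = pd.Series(range(5))
print(s.rolling(window=3).mean())               # label sits at the window's right edge
print(s.rolling(window=3, center=True).mean())  # result shifted so the window
                                                # is centered on each label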
Provide validation for our window type; return the window we have already validated.
def _prep_window(self, **kwargs): """ Provide validation for our window type, return the window we have already validated. """ window = self._get_window() if isinstance(window, (list, tuple, np.ndarray)): return com.asarray_tuplesafe(window).astype(float) elif is_integer(window): import scipy.signal as sig # the below may pop from kwargs def _validate_win_type(win_type, kwargs): arg_map = {'kaiser': ['beta'], 'gaussian': ['std'], 'general_gaussian': ['power', 'width'], 'slepian': ['width']} if win_type in arg_map: return tuple([win_type] + _pop_args(win_type, arg_map[win_type], kwargs)) return win_type def _pop_args(win_type, arg_names, kwargs): msg = '%s window requires %%s' % win_type all_args = [] for n in arg_names: if n not in kwargs: raise ValueError(msg % n) all_args.append(kwargs.pop(n)) return all_args win_type = _validate_win_type(self.win_type, kwargs) # GH #15662. `False` makes symmetric window, rather than periodic. return sig.get_window(win_type, window, False).astype(float)
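The arg_map above is why some win_type values require extra keywords; a sketch through the public API (scipy must be installed, since the weights come from scipy.signal.get_window):

import pandas as pd

s = pd.Series(range(10), dtype=float)
# 'gaussian' requires 'std', per the arg_map in _validate_win_type
print(s.rolling(window=5, win_type='gaussian').mean(std=2))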
Applies a moving window of type window_type on the data.
def _apply_window(self, mean=True, **kwargs): """ Applies a moving window of type ``window_type`` on the data. Parameters ---------- mean : bool, default True If True computes weighted mean, else weighted sum Returns ------- y : same type as input argument """ window = self._prep_window(**kwargs) center = self.center blocks, obj, index = self._create_blocks() results = [] for b in blocks: try: values = self._prep_values(b.values) except TypeError: results.append(b.values.copy()) continue if values.size == 0: results.append(values.copy()) continue offset = _offset(window, center) additional_nans = np.array([np.NaN] * offset) def f(arg, *args, **kwargs): minp = _use_window(self.min_periods, len(window)) return libwindow.roll_window(np.concatenate((arg, additional_nans)) if center else arg, window, minp, avg=mean) result = np.apply_along_axis(f, self.axis, values) if center: result = self._center_window(result, window) results.append(result) return self._wrap_results(results, blocks, obj)
Dispatch to apply; we are stripping all of the _apply kwargs and performing the original function call on the grouped object.
def _apply(self, func, name, window=None, center=None, check_minp=None, **kwargs): """ Dispatch to apply; we are stripping all of the _apply kwargs and performing the original function call on the grouped object. """ def f(x, name=name, *args): x = self._shallow_copy(x) if isinstance(name, str): return getattr(x, name)(*args, **kwargs) return x.apply(name, *args, **kwargs) return self._groupby.apply(f)
Rolling statistical measure using supplied function.
def _apply(self, func, name=None, window=None, center=None, check_minp=None, **kwargs): """ Rolling statistical measure using supplied function. Designed to be used with passed-in Cython array-based functions. Parameters ---------- func : str/callable to apply name : str, optional name of this function window : int/array, default to _get_window() center : bool, default to self.center check_minp : function, default to _use_window Returns ------- y : type of input """ if center is None: center = self.center if window is None: window = self._get_window() if check_minp is None: check_minp = _use_window blocks, obj, index = self._create_blocks() index, indexi = self._get_index(index=index) results = [] for b in blocks: values = self._prep_values(b.values) if values.size == 0: results.append(values.copy()) continue # if we have a string function name, wrap it if isinstance(func, str): cfunc = getattr(libwindow, func, None) if cfunc is None: raise ValueError("we do not support this function " "in libwindow.{func}".format(func=func)) def func(arg, window, min_periods=None, closed=None): minp = check_minp(min_periods, window) # ensure we are only rolling on floats arg = ensure_float64(arg) return cfunc(arg, window, minp, indexi, closed, **kwargs) # calculation function if center: offset = _offset(window, center) additional_nans = np.array([np.NaN] * offset) def calc(x): return func(np.concatenate((x, additional_nans)), window, min_periods=self.min_periods, closed=self.closed) else: def calc(x): return func(x, window, min_periods=self.min_periods, closed=self.closed) with np.errstate(all='ignore'): if values.ndim > 1: result = np.apply_along_axis(calc, self.axis, values) else: result = calc(values) if center: result = self._center_window(result, window) results.append(result) return self._wrap_results(results, blocks, obj)
Validate that `on` is monotonic.
def _validate_monotonic(self): """ Validate on is_monotonic. """ if not self._on.is_monotonic: formatted = self.on or 'index' raise ValueError("{0} must be " "monotonic".format(formatted))
Validate & return window frequency.
def _validate_freq(self): """ Validate & return window frequency. """ from pandas.tseries.frequencies import to_offset try: return to_offset(self.window) except (TypeError, ValueError): raise ValueError("passed window {0} is not " "compatible with a datetimelike " "index".format(self.window))
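This validation backs the string-window form of rolling on a datetime-like index; a minimal sketch:

import pandas as pd

s = pd.Series([1.0, 2.0, 3.0],
              index=pd.date_range('2018-01-01', periods=3, freq='S'))
print(s.rolling('2s').sum())   # window passed as a frequency string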
Get the window length over which to perform some operation.
def _get_window(self, other=None): """ Get the window length over which to perform some operation. Parameters ---------- other : object, default None The other object that is involved in the operation. Such an object is involved for operations like covariance. Returns ------- window : int The window length. """ axis = self.obj._get_axis(self.axis) length = len(axis) + (other is not None) * len(axis) other = self.min_periods or -1 return max(length, other)
Rolling statistical measure using supplied function. Designed to be used with passed-in Cython array-based functions.
def _apply(self, func, **kwargs): """ Rolling statistical measure using supplied function. Designed to be used with passed-in Cython array-based functions. Parameters ---------- func : str/callable to apply Returns ------- y : same type as input argument """ blocks, obj, index = self._create_blocks() results = [] for b in blocks: try: values = self._prep_values(b.values) except TypeError: results.append(b.values.copy()) continue if values.size == 0: results.append(values.copy()) continue # if we have a string function name, wrap it if isinstance(func, str): cfunc = getattr(libwindow, func, None) if cfunc is None: raise ValueError("we do not support this function " "in libwindow.{func}".format(func=func)) def func(arg): return cfunc(arg, self.com, int(self.adjust), int(self.ignore_na), int(self.min_periods)) results.append(np.apply_along_axis(func, self.axis, values)) return self._wrap_results(results, blocks, obj)
Exponential weighted moving average.
def mean(self, *args, **kwargs): """ Exponential weighted moving average. Parameters ---------- *args, **kwargs Arguments and keyword arguments to be passed into func. """ nv.validate_window_func('mean', args, kwargs) return self._apply('ewma', **kwargs)
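A minimal usage sketch; the public ewm accessor dispatches to the 'ewma' kernel named above:

import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0])
print(s.ewm(com=0.5).mean())   # exponentially weighted moving average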
Exponential weighted moving stddev.
def std(self, bias=False, *args, **kwargs): """ Exponential weighted moving stddev. """ nv.validate_window_func('std', args, kwargs) return _zsqrt(self.var(bias=bias, **kwargs))
Exponential weighted moving variance.
def var(self, bias=False, *args, **kwargs): """ Exponential weighted moving variance. """ nv.validate_window_func('var', args, kwargs) def f(arg): return libwindow.ewmcov(arg, arg, self.com, int(self.adjust), int(self.ignore_na), int(self.min_periods), int(bias)) return self._apply(f, **kwargs)
Exponential weighted sample covariance.
def cov(self, other=None, pairwise=None, bias=False, **kwargs): """ Exponential weighted sample covariance. """ if other is None: other = self._selected_obj # only default unset pairwise = True if pairwise is None else pairwise other = self._shallow_copy(other) def _get_cov(X, Y): X = self._shallow_copy(X) Y = self._shallow_copy(Y) cov = libwindow.ewmcov(X._prep_values(), Y._prep_values(), self.com, int(self.adjust), int(self.ignore_na), int(self.min_periods), int(bias)) return X._wrap_result(cov) return _flex_binary_moment(self._selected_obj, other._selected_obj, _get_cov, pairwise=bool(pairwise))
Exponential weighted sample correlation.
def corr(self, other=None, pairwise=None, **kwargs): """ Exponential weighted sample correlation. """ if other is None: other = self._selected_obj # only default unset pairwise = True if pairwise is None else pairwise other = self._shallow_copy(other) def _get_corr(X, Y): X = self._shallow_copy(X) Y = self._shallow_copy(Y) def _cov(x, y): return libwindow.ewmcov(x, y, self.com, int(self.adjust), int(self.ignore_na), int(self.min_periods), 1) x_values = X._prep_values() y_values = Y._prep_values() with np.errstate(all='ignore'): cov = _cov(x_values, y_values) x_var = _cov(x_values, x_values) y_var = _cov(y_values, y_values) corr = cov / _zsqrt(x_var * y_var) return X._wrap_result(corr) return _flex_binary_moment(self._selected_obj, other._selected_obj, _get_corr, pairwise=bool(pairwise))
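A short pairwise sketch of the EW covariance/correlation pair above via the public accessor:

import pandas as pd

x = pd.Series([1.0, 2.0, 3.0, 4.0])
y = pd.Series([4.0, 3.0, 2.0, 1.0])
print(x.ewm(com=0.5).cov(y))    # EW sample covariance
print(x.ewm(com=0.5).corr(y))   # EW sample correlation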
Makes sure that time and panels are conformable.
def _ensure_like_indices(time, panels): """ Makes sure that time and panels are conformable. """ n_time = len(time) n_panel = len(panels) u_panels = np.unique(panels) # this sorts! u_time = np.unique(time) if len(u_time) == n_time: time = np.tile(u_time, len(u_panels)) if len(u_panels) == n_panel: panels = np.repeat(u_panels, len(u_time)) return time, panels
Returns a multi-index suitable for a panel-like DataFrame.
def panel_index(time, panels, names=None): """ Returns a multi-index suitable for a panel-like DataFrame. Parameters ---------- time : array-like Time index, does not have to repeat panels : array-like Panel index, does not have to repeat names : list, optional List containing the names of the indices Returns ------- multi_index : MultiIndex Time index is the first level, the panels are the second level. Examples -------- >>> years = range(1960,1963) >>> panels = ['A', 'B', 'C'] >>> panel_idx = panel_index(years, panels) >>> panel_idx MultiIndex([(1960, 'A'), (1961, 'A'), (1962, 'A'), (1960, 'B'), (1961, 'B'), (1962, 'B'), (1960, 'C'), (1961, 'C'), (1962, 'C')], dtype=object) or >>> years = np.repeat(range(1960,1963), 3) >>> panels = np.tile(['A', 'B', 'C'], 3) >>> panel_idx = panel_index(years, panels) >>> panel_idx MultiIndex([(1960, 'A'), (1960, 'B'), (1960, 'C'), (1961, 'A'), (1961, 'B'), (1961, 'C'), (1962, 'A'), (1962, 'B'), (1962, 'C')], dtype=object) """ if names is None: names = ['time', 'panel'] time, panels = _ensure_like_indices(time, panels) return MultiIndex.from_arrays([time, panels], sortorder=None, names=names)
Generate ND initialization; axes are passed as required objects to __init__.
def _init_data(self, data, copy, dtype, **kwargs): """ Generate ND initialization; axes are passed as required objects to __init__. """ if data is None: data = {} if dtype is not None: dtype = self._validate_dtype(dtype) passed_axes = [kwargs.pop(a, None) for a in self._AXIS_ORDERS] if kwargs: raise TypeError('_init_data() got an unexpected keyword ' 'argument "{0}"'.format(list(kwargs.keys())[0])) axes = None if isinstance(data, BlockManager): if com._any_not_none(*passed_axes): axes = [x if x is not None else y for x, y in zip(passed_axes, data.axes)] mgr = data elif isinstance(data, dict): mgr = self._init_dict(data, passed_axes, dtype=dtype) copy = False dtype = None elif isinstance(data, (np.ndarray, list)): mgr = self._init_matrix(data, passed_axes, dtype=dtype, copy=copy) copy = False dtype = None elif is_scalar(data) and com._all_not_none(*passed_axes): values = cast_scalar_to_array([len(x) for x in passed_axes], data, dtype=dtype) mgr = self._init_matrix(values, passed_axes, dtype=values.dtype, copy=False) copy = False else: # pragma: no cover raise ValueError('Panel constructor not properly called!') NDFrame.__init__(self, mgr, axes=axes, copy=copy, dtype=dtype)
Construct Panel from dict of DataFrame objects.
def from_dict(cls, data, intersect=False, orient='items', dtype=None): """ Construct Panel from dict of DataFrame objects. Parameters ---------- data : dict {field : DataFrame} intersect : boolean Intersect indexes of input DataFrames orient : {'items', 'minor'}, default 'items' The "orientation" of the data. If the keys of the passed dict should be the items of the result panel, pass 'items' (default). Otherwise if the columns of the values of the passed DataFrame objects should be the items (which in the case of mixed-dtype data you should do), instead pass 'minor' dtype : dtype, default None Data type to force, otherwise infer Returns ------- Panel """ from collections import defaultdict orient = orient.lower() if orient == 'minor': new_data = defaultdict(OrderedDict) for col, df in data.items(): for item, s in df.items(): new_data[item][col] = s data = new_data elif orient != 'items': # pragma: no cover raise ValueError('Orientation must be one of {items, minor}.') d = cls._homogenize_dict(cls, data, intersect=intersect, dtype=dtype) ks = list(d['data'].keys()) if not isinstance(d['data'], OrderedDict): ks = list(sorted(ks)) d[cls._info_axis_name] = Index(ks) return cls(**d)
Get my plane axes indexes: these are already (as compared with higher level planes) as we are returning a DataFrame axes indexes.
def _get_plane_axes_index(self, axis): """ Get my plane axes indexes: these are already (as compared with higher level planes), as we are returning a DataFrame axes indexes. """ axis_name = self._get_axis_name(axis) if axis_name == 'major_axis': index = 'minor_axis' columns = 'items' elif axis_name == 'minor_axis': index = 'major_axis' columns = 'items' elif axis_name == 'items': index = 'major_axis' columns = 'minor_axis' return index, columns
Get my plane axes indexes: these are already (as compared with higher level planes) as we are returning a DataFrame axes.
def _get_plane_axes(self, axis): """ Get my plane axes indexes: these are already (as compared with higher level planes), as we are returning a DataFrame axes. """ return [self._get_axis(axi) for axi in self._get_plane_axes_index(axis)]
Write each DataFrame in Panel to a separate excel sheet.
def to_excel(self, path, na_rep='', engine=None, **kwargs): """ Write each DataFrame in Panel to a separate excel sheet. Parameters ---------- path : string or ExcelWriter object File path or existing ExcelWriter na_rep : string, default '' Missing data representation engine : string, default None write engine to use - you can also set this via the options ``io.excel.xlsx.writer``, ``io.excel.xls.writer``, and ``io.excel.xlsm.writer``. Other Parameters ---------------- float_format : string, default None Format string for floating point numbers cols : sequence, optional Columns to write header : boolean or list of string, default True Write out column names. If a list of string is given it is assumed to be aliases for the column names index : boolean, default True Write row names (index) index_label : string or sequence, default None Column label for index column(s) if desired. If None is given, and `header` and `index` are True, then the index names are used. A sequence should be given if the DataFrame uses MultiIndex. startrow : upper left cell row to dump data frame startcol : upper left cell column to dump data frame Notes ----- Keyword arguments (and na_rep) are passed to the ``to_excel`` method for each DataFrame written. """ from pandas.io.excel import ExcelWriter if isinstance(path, str): writer = ExcelWriter(path, engine=engine) else: writer = path kwargs['na_rep'] = na_rep for item, df in self.iteritems(): name = str(item) df.to_excel(writer, name, **kwargs) writer.save()
Quickly retrieve single value at (item, major, minor) location.
def get_value(self, *args, **kwargs): """ Quickly retrieve single value at (item, major, minor) location. .. deprecated:: 0.21.0 Please use .at[] or .iat[] accessors. Parameters ---------- item : item label (panel item) major : major axis label (panel item row) minor : minor axis label (panel item column) takeable : interpret the passed labels as indexers, default False Returns ------- value : scalar value """ warnings.warn("get_value is deprecated and will be removed " "in a future release. Please use " ".at[] or .iat[] accessors instead", FutureWarning, stacklevel=2) return self._get_value(*args, **kwargs)
Quickly set single value at (item, major, minor) location.
def set_value(self, *args, **kwargs): """ Quickly set single value at (item, major, minor) location. .. deprecated:: 0.21.0 Please use .at[] or .iat[] accessors. Parameters ---------- item : item label (panel item) major : major axis label (panel item row) minor : minor axis label (panel item column) value : scalar takeable : interpret the passed labels as indexers, default False Returns ------- panel : Panel If label combo is contained, will be reference to calling Panel, otherwise a new object. """ warnings.warn("set_value is deprecated and will be removed " "in a future release. Please use " ".at[] or .iat[] accessors instead", FutureWarning, stacklevel=2) return self._set_value(*args, **kwargs)
Unpickle the panel.
def _unpickle_panel_compat(self, state): # pragma: no cover """ Unpickle the panel. """ from pandas.io.pickle import _unpickle_array _unpickle = _unpickle_array vals, items, major, minor = state items = _unpickle(items) major = _unpickle(major) minor = _unpickle(minor) values = _unpickle(vals) wp = Panel(values, items, major, minor) self._data = wp._data
Conform input DataFrame to align with chosen axis pair.
def conform(self, frame, axis='items'): """ Conform input DataFrame to align with chosen axis pair. Parameters ---------- frame : DataFrame axis : {'items', 'major', 'minor'} Axis the input corresponds to. E.g., if axis='major', then the frame's columns would be items, and the index would be values of the minor axis Returns ------- DataFrame """ axes = self._get_plane_axes(axis) return frame.reindex(**self._extract_axes_for_slice(self, axes))
Round each value in Panel to a specified number of decimal places.
def round(self, decimals=0, *args, **kwargs): """ Round each value in Panel to a specified number of decimal places. .. versionadded:: 0.18.0 Parameters ---------- decimals : int Number of decimal places to round to (default: 0). If decimals is negative, it specifies the number of positions to the left of the decimal point. Returns ------- Panel object See Also -------- numpy.around """ nv.validate_round(args, kwargs) if is_integer(decimals): result = np.apply_along_axis(np.round, 0, self.values) return self._wrap_result(result, axis=0) raise TypeError("decimals must be an integer")
Drop 2D from panel, holding passed axis constant.
def dropna(self, axis=0, how='any', inplace=False): """ Drop 2D from panel, holding passed axis constant. Parameters ---------- axis : int, default 0 Axis to hold constant. E.g. axis=1 will drop major_axis entries having a certain amount of NA data how : {'all', 'any'}, default 'any' 'any': one or more values are NA in the DataFrame along the axis. For 'all' they all must be. inplace : bool, default False If True, do operation inplace and return None. Returns ------- dropped : Panel """ axis = self._get_axis_number(axis) values = self.values mask = notna(values) for ax in reversed(sorted(set(range(self._AXIS_LEN)) - {axis})): mask = mask.sum(ax) per_slice = np.prod(values.shape[:axis] + values.shape[axis + 1:]) if how == 'all': cond = mask > 0 else: cond = mask == per_slice new_ax = self._get_axis(axis)[cond] result = self.reindex_axis(new_ax, axis=axis) if inplace: self._update_inplace(result) else: return result
Return slice of panel along selected axis.
def xs(self, key, axis=1): """ Return slice of panel along selected axis. Parameters ---------- key : object Label axis : {'items', 'major', 'minor'}, default 1/'major' Returns ------- y : ndim(self)-1 Notes ----- xs is only for getting, not setting values. MultiIndex Slicers is a generic way to get/set values on any level or levels and is a superset of xs functionality, see :ref:`MultiIndex Slicers <advanced.mi_slicers>` """ axis = self._get_axis_number(axis) if axis == 0: return self[key] self._consolidate_inplace() axis_number = self._get_axis_number(axis) new_data = self._data.xs(key, axis=axis_number, copy=False) result = self._construct_return_type(new_data) copy = new_data.is_mixed_type result._set_is_copy(self, copy=copy) return result
Parameters: i : int, slice, or sequence of integers; axis : int.
def _ixs(self, i, axis=0): """ Parameters ---------- i : int, slice, or sequence of integers axis : int """ ax = self._get_axis(axis) key = ax[i] # xs cannot handle a non-scalar key, so just reindex here # if we have a multi-index and a single tuple, then its a reduction # (GH 7516) if not (isinstance(ax, MultiIndex) and isinstance(key, tuple)): if is_list_like(key): indexer = {self._get_axis_name(axis): key} return self.reindex(**indexer) # a reduction if axis == 0: values = self._data.iget(i) return self._box_item_values(key, values) # xs by position self._consolidate_inplace() new_data = self._data.xs(i, axis=axis, copy=True, takeable=True) return self._construct_return_type(new_data)
Transform wide format into long (stacked) format as DataFrame whose columns are the Panel's items and whose index is a MultiIndex formed of the Panel's major and minor axes.
def to_frame(self, filter_observations=True): """ Transform wide format into long (stacked) format as DataFrame whose columns are the Panel's items and whose index is a MultiIndex formed of the Panel's major and minor axes. Parameters ---------- filter_observations : boolean, default True Drop (major, minor) pairs without a complete set of observations across all the items Returns ------- y : DataFrame """ _, N, K = self.shape if filter_observations: # shaped like the return DataFrame mask = notna(self.values).all(axis=0) # size = mask.sum() selector = mask.ravel() else: # size = N * K selector = slice(None, None) data = {item: self[item].values.ravel()[selector] for item in self.items} def construct_multi_parts(idx, n_repeat, n_shuffle=1): # Replicates and shuffles MultiIndex, returns individual attributes codes = [np.repeat(x, n_repeat) for x in idx.codes] # Assumes that each label is divisible by n_shuffle codes = [x.reshape(n_shuffle, -1).ravel(order='F') for x in codes] codes = [x[selector] for x in codes] levels = idx.levels names = idx.names return codes, levels, names def construct_index_parts(idx, major=True): levels = [idx] if major: codes = [np.arange(N).repeat(K)[selector]] names = idx.name or 'major' else: codes = np.arange(K).reshape(1, K)[np.zeros(N, dtype=int)] codes = [codes.ravel()[selector]] names = idx.name or 'minor' names = [names] return codes, levels, names if isinstance(self.major_axis, MultiIndex): major_codes, major_levels, major_names = construct_multi_parts( self.major_axis, n_repeat=K) else: major_codes, major_levels, major_names = construct_index_parts( self.major_axis) if isinstance(self.minor_axis, MultiIndex): minor_codes, minor_levels, minor_names = construct_multi_parts( self.minor_axis, n_repeat=N, n_shuffle=K) else: minor_codes, minor_levels, minor_names = construct_index_parts( self.minor_axis, major=False) levels = major_levels + minor_levels codes = major_codes + minor_codes names = major_names + minor_names index = MultiIndex(levels=levels, codes=codes, names=names, verify_integrity=False) return DataFrame(data, index=index, columns=self.items)
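A sketch of the wide-to-long reshape (this assumes an older pandas where Panel still exists; it was deprecated in 0.20 and removed in 1.0):

import numpy as np
import pandas as pd

p = pd.Panel(np.arange(8).reshape(2, 2, 2), items=['a', 'b'],
             major_axis=[0, 1], minor_axis=['x', 'y'])
long_df = p.to_frame()   # columns are the items; index is (major, minor)
print(long_df)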
Apply function along axis (or axes) of the Panel.
def apply(self, func, axis='major', **kwargs): """ Apply function along axis (or axes) of the Panel. Parameters ---------- func : function Function to apply to each combination of 'other' axes e.g. if axis = 'items', the combination of major_axis/minor_axis will each be passed as a Series; if axis = ('items', 'major'), DataFrames of items & major axis will be passed axis : {'items', 'minor', 'major'}, or {0, 1, 2}, or a tuple with two axes **kwargs Additional keyword arguments will be passed to the function. Returns ------- result : Panel, DataFrame, or Series Examples -------- Returns a Panel with the square root of each element >>> p = pd.Panel(np.random.rand(4, 3, 2)) # doctest: +SKIP >>> p.apply(np.sqrt) Equivalent to p.sum(1), returning a DataFrame >>> p.apply(lambda x: x.sum(), axis=1) # doctest: +SKIP Equivalent to previous: >>> p.apply(lambda x: x.sum(), axis='major') # doctest: +SKIP Return the shapes of each DataFrame over axis 2 (i.e the shapes of items x major), as a Series >>> p.apply(lambda x: x.shape, axis=(0,1)) # doctest: +SKIP """ if kwargs and not isinstance(func, np.ufunc): f = lambda x: func(x, **kwargs) else: f = func # 2d-slabs if isinstance(axis, (tuple, list)) and len(axis) == 2: return self._apply_2d(f, axis=axis) axis = self._get_axis_number(axis) # try ufunc like if isinstance(f, np.ufunc): try: with np.errstate(all='ignore'): result = np.apply_along_axis(func, axis, self.values) return self._wrap_result(result, axis=axis) except (AttributeError): pass # 1d return self._apply_1d(f, axis=axis)
Handle 2-d slices, equivalent to iterating over the other axis.
def _apply_2d(self, func, axis): """ Handle 2-d slices, equiv to iterating over the other axis. """ ndim = self.ndim axis = [self._get_axis_number(a) for a in axis] # construct slabs, in 2-d this is a DataFrame result indexer_axis = list(range(ndim)) for a in axis: indexer_axis.remove(a) indexer_axis = indexer_axis[0] slicer = [slice(None, None)] * ndim ax = self._get_axis(indexer_axis) results = [] for i, e in enumerate(ax): slicer[indexer_axis] = i sliced = self.iloc[tuple(slicer)] obj = func(sliced) results.append((e, obj)) return self._construct_return_type(dict(results))
Return the type for the ndim of the result.
def _construct_return_type(self, result, axes=None): """ Return the type for the ndim of the result. """ ndim = getattr(result, 'ndim', None) # need to assume they are the same if ndim is None: if isinstance(result, dict): ndim = getattr(list(result.values())[0], 'ndim', 0) # have a dict, so top-level is +1 dim if ndim != 0: ndim += 1 # scalar if ndim == 0: return Series(result) # same as self elif self.ndim == ndim: # return the construction dictionary for these axes if axes is None: return self._constructor(result) return self._constructor(result, **self._construct_axes_dict()) # sliced elif self.ndim == ndim + 1: if axes is None: return self._constructor_sliced(result) return self._constructor_sliced( result, **self._extract_axes_for_slice(self, axes)) raise ValueError('invalid _construct_return_type [self->{self}] ' '[result->{result}]'.format(self=self, result=result))
Return number of observations over requested axis.
def count(self, axis='major'): """ Return number of observations over requested axis. Parameters ---------- axis : {'items', 'major', 'minor'} or {0, 1, 2} Returns ------- count : DataFrame """ i = self._get_axis_number(axis) values = self.values mask = np.isfinite(values) result = mask.sum(axis=i, dtype='int64') return self._wrap_result(result, axis)
Shift index by desired number of periods with an optional time freq.
def shift(self, periods=1, freq=None, axis='major'): """ Shift index by desired number of periods with an optional time freq. The shifted data will not include the dropped periods and the shifted axis will be smaller than the original. This is different from the behavior of DataFrame.shift() Parameters ---------- periods : int Number of periods to move, can be positive or negative freq : DateOffset, timedelta, or time rule string, optional axis : {'items', 'major', 'minor'} or {0, 1, 2} Returns ------- shifted : Panel """ if freq: return self.tshift(periods, freq, axis=axis) return super().slice_shift(periods, axis=axis)
Join items with other Panel, aligning on the major and minor axes.
def join(self, other, how='left', lsuffix='', rsuffix=''): """ Join items with other Panel either on major and minor axes column. Parameters ---------- other : Panel or list of Panels Index should be similar to one of the columns in this one how : {'left', 'right', 'outer', 'inner'} How to handle indexes of the two objects. Default: 'left' for joining on index, None otherwise * left: use calling frame's index * right: use input frame's index * outer: form union of indexes * inner: use intersection of indexes lsuffix : string Suffix to use from left frame's overlapping columns rsuffix : string Suffix to use from right frame's overlapping columns Returns ------- joined : Panel """ from pandas.core.reshape.concat import concat if isinstance(other, Panel): join_major, join_minor = self._get_join_index(other, how) this = self.reindex(major=join_major, minor=join_minor) other = other.reindex(major=join_major, minor=join_minor) merged_data = this._data.merge(other._data, lsuffix, rsuffix) return self._constructor(merged_data) else: if lsuffix or rsuffix: raise ValueError('Suffixes not supported when passing ' 'multiple panels') if how == 'left': how = 'outer' join_axes = [self.major_axis, self.minor_axis] elif how == 'right': raise ValueError('Right join not supported with multiple ' 'panels') else: join_axes = None return concat([self] + list(other), axis=0, join=how, join_axes=join_axes, verify_integrity=True)
Modify Panel in place using non-NA values from other Panel.
def update(self, other, join='left', overwrite=True, filter_func=None, errors='ignore'): """ Modify Panel in place using non-NA values from other Panel. May also use object coercible to Panel. Will align on items. Parameters ---------- other : Panel, or object coercible to Panel The object from which the caller will be updated. join : {'left', 'right', 'outer', 'inner'}, default 'left' How individual DataFrames are joined. overwrite : bool, default True If True then overwrite values for common keys in the calling Panel. filter_func : callable(1d-array) -> 1d-array<bool>, default None Can choose to replace values other than NA. Return True for values that should be updated. errors : {'raise', 'ignore'}, default 'ignore' If 'raise', will raise an error if a DataFrame and other both contain non-NA data in the same place. .. versionchanged :: 0.24.0 Changed from `raise_conflict=False|True` to `errors='ignore'|'raise'`. See Also -------- DataFrame.update : Similar method for DataFrames. dict.update : Similar method for dictionaries. """ if not isinstance(other, self._constructor): other = self._constructor(other) axis_name = self._info_axis_name axis_values = self._info_axis other = other.reindex(**{axis_name: axis_values}) for frame in axis_values: self[frame].update(other[frame], join=join, overwrite=overwrite, filter_func=filter_func, errors=errors)
Return a list of the axis indices.
def _extract_axes(self, data, axes, **kwargs): """ Return a list of the axis indices. """ return [self._extract_axis(self, data, axis=i, **kwargs) for i, a in enumerate(axes)]
Return the slice dictionary for these axes.
def _extract_axes_for_slice(self, axes): """ Return the slice dictionary for these axes. """ return {self._AXIS_SLICEMAP[i]: a for i, a in zip(self._AXIS_ORDERS[self._AXIS_LEN - len(axes):], axes)}
Conform set of _constructor_sliced-like objects to either an intersection of indices/columns or a union.
def _homogenize_dict(self, frames, intersect=True, dtype=None): """ Conform set of _constructor_sliced-like objects to either an intersection of indices / columns or a union. Parameters ---------- frames : dict intersect : boolean, default True Returns ------- dict of aligned results & indices """ result = dict() # caller differs dict/ODict, preserved type if isinstance(frames, OrderedDict): result = OrderedDict() adj_frames = OrderedDict() for k, v in frames.items(): if isinstance(v, dict): adj_frames[k] = self._constructor_sliced(v) else: adj_frames[k] = v axes = self._AXIS_ORDERS[1:] axes_dict = {a: ax for a, ax in zip(axes, self._extract_axes( self, adj_frames, axes, intersect=intersect))} reindex_dict = {self._AXIS_SLICEMAP[a]: axes_dict[a] for a in axes} reindex_dict['copy'] = False for key, frame in adj_frames.items(): if frame is not None: result[key] = frame.reindex(**reindex_dict) else: result[key] = None axes_dict['data'] = result axes_dict['dtype'] = dtype return axes_dict
For the particular label_list, gets the offsets into the hypothetical list representing the totally ordered cartesian product of all possible label combinations, *as long as* this space fits within int64 bounds; otherwise, though group indices identify unique combinations of labels, they cannot be deconstructed. - If `sort`, rank of returned ids preserve lexical ranks of labels, i.e. returned ids can be used to do lexical sort on labels; - If `xnull`, nulls (-1 labels) are passed through.
def get_group_index(labels, shape, sort, xnull): """ For the particular label_list, gets the offsets into the hypothetical list representing the totally ordered cartesian product of all possible label combinations, *as long as* this space fits within int64 bounds; otherwise, though group indices identify unique combinations of labels, they cannot be deconstructed. - If `sort`, rank of returned ids preserve lexical ranks of labels. i.e. returned id's can be used to do lexical sort on labels; - If `xnull` nulls (-1 labels) are passed through. Parameters ---------- labels: sequence of arrays Integers identifying levels at each location shape: sequence of ints same length as labels Number of unique levels at each location sort: boolean If the ranks of returned ids should match lexical ranks of labels xnull: boolean If true nulls are excluded. i.e. -1 values in the labels are passed through Returns ------- An array of type int64 where two elements are equal if their corresponding labels are equal at all location. """ def _int64_cut_off(shape): acc = 1 for i, mul in enumerate(shape): acc *= int(mul) if not acc < _INT64_MAX: return i return len(shape) def maybe_lift(lab, size): # promote nan values (assigned -1 label in lab array) # so that all output values are non-negative return (lab + 1, size + 1) if (lab == -1).any() else (lab, size) labels = map(ensure_int64, labels) if not xnull: labels, shape = map(list, zip(*map(maybe_lift, labels, shape))) labels = list(labels) shape = list(shape) # Iteratively process all the labels in chunks sized so less # than _INT64_MAX unique int ids will be required for each chunk while True: # how many levels can be done without overflow: nlev = _int64_cut_off(shape) # compute flat ids for the first `nlev` levels stride = np.prod(shape[1:nlev], dtype='i8') out = stride * labels[0].astype('i8', subok=False, copy=False) for i in range(1, nlev): if shape[i] == 0: stride = 0 else: stride //= shape[i] out += labels[i] * stride if xnull: # exclude nulls mask = labels[0] == -1 for lab in labels[1:nlev]: mask |= lab == -1 out[mask] = -1 if nlev == len(shape): # all levels done! break # compress what has been done so far in order to avoid overflow # to retain lexical ranks, obs_ids should be sorted comp_ids, obs_ids = compress_group_index(out, sort=sort) labels = [comp_ids] + labels[nlev:] shape = [len(obs_ids)] + shape[nlev:] return out
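For intuition, a minimal NumPy sketch of the no-overflow fast path described above (the label arrays and level sizes are invented for illustration; this is not pandas' implementation):

import numpy as np

# codes into each level's uniques, plus the number of uniques per level
labels = [np.array([0, 1, 1, 0]), np.array([2, 0, 1, 2])]
shape = [2, 3]

# offset into the 2 x 3 cartesian product: id = lab0 * 3 + lab1,
# the same stride arithmetic get_group_index uses when nothing overflows
stride = np.prod(shape[1:], dtype='i8')  # 3
out = labels[0].astype('i8') * stride + labels[1].astype('i8')
print(out)  # [2 3 4 2] -- equal ids mark equal label combinations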
reconstruct labels from observed group ids
def decons_obs_group_ids(comp_ids, obs_ids, shape, labels, xnull): """ reconstruct labels from observed group ids Parameters ---------- xnull: boolean, if nulls are excluded; i.e. -1 labels are passed through """ if not xnull: lift = np.fromiter(((a == -1).any() for a in labels), dtype='i8') shape = np.asarray(shape, dtype='i8') + lift if not is_int64_overflow_possible(shape): # obs ids are deconstructable! take the fast route! out = decons_group_index(obs_ids, shape) return out if xnull or not lift.any() \ else [x - y for x, y in zip(out, lift)] i = unique_label_indices(comp_ids) i8copy = lambda a: a.astype('i8', subok=False, copy=True) return [i8copy(lab[i]) for lab in labels]
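When the group ids are deconstructable (no int64 overflow), the inverse operation is repeated divmod by the level sizes, innermost level first; a sketch on the same invented data as above:

import numpy as np

shape = [2, 3]
ids = np.array([2, 3, 4, 2], dtype='i8')

labels = []
for size in reversed(shape):
    ids, lab = np.divmod(ids, size)  # peel off one level at a time
    labels.append(lab)
labels.reverse()
print(labels)  # [array([0, 1, 1, 0]), array([2, 0, 1, 2])]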
This is intended to be a drop-in replacement for np.argsort which handles NaNs. It adds ascending and na_position parameters. GH #6399, #5231
def nargsort(items, kind='quicksort', ascending=True, na_position='last'): """ This is intended to be a drop-in replacement for np.argsort which handles NaNs. It adds ascending and na_position parameters. GH #6399, #5231 """ # specially handle Categorical if is_categorical_dtype(items): if na_position not in {'first', 'last'}: raise ValueError('invalid na_position: {!r}'.format(na_position)) mask = isna(items) cnt_null = mask.sum() sorted_idx = items.argsort(ascending=ascending, kind=kind) if ascending and na_position == 'last': # NaN is coded as -1 and is listed in front after sorting sorted_idx = np.roll(sorted_idx, -cnt_null) elif not ascending and na_position == 'first': # NaN is coded as -1 and is listed in the end after sorting sorted_idx = np.roll(sorted_idx, cnt_null) return sorted_idx with warnings.catch_warnings(): # https://github.com/pandas-dev/pandas/issues/25439 # can be removed once ExtensionArrays are properly handled by nargsort warnings.filterwarnings( "ignore", category=FutureWarning, message="Converting timezone-aware DatetimeArray to") items = np.asanyarray(items) idx = np.arange(len(items)) mask = isna(items) non_nans = items[~mask] non_nan_idx = idx[~mask] nan_idx = np.nonzero(mask)[0] if not ascending: non_nans = non_nans[::-1] non_nan_idx = non_nan_idx[::-1] indexer = non_nan_idx[non_nans.argsort(kind=kind)] if not ascending: indexer = indexer[::-1] # Finally, place the NaNs at the end or the beginning according to # na_position if na_position == 'last': indexer = np.concatenate([indexer, nan_idx]) elif na_position == 'first': indexer = np.concatenate([nan_idx, indexer]) else: raise ValueError('invalid na_position: {!r}'.format(na_position)) return indexer
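The essential difference from np.argsort is explicit NaN placement; a simplified sketch of the ascending, na_position='first' path for a plain float array:

import numpy as np

items = np.array([3.0, np.nan, 1.0, 2.0])

mask = np.isnan(items)
idx = np.arange(len(items))
non_nan = idx[~mask][items[~mask].argsort()]  # NaNs excluded from the sort
print(np.concatenate([np.nonzero(mask)[0], non_nan]))  # [1 2 3 0]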
return a dict of {labels} -> {indexers}
def get_indexer_dict(label_list, keys): """ return a dict of {labels} -> {indexers} """ shape = list(map(len, keys)) group_index = get_group_index(label_list, shape, sort=True, xnull=True) ngroups = ((group_index.size and group_index.max()) + 1) \ if is_int64_overflow_possible(shape) \ else np.prod(shape, dtype='i8') sorter = get_group_index_sorter(group_index, ngroups) sorted_labels = [lab.take(sorter) for lab in label_list] group_index = group_index.take(sorter) return lib.indices_fast(sorter, group_index, keys, sorted_labels)
algos.groupsort_indexer implements `counting sort` and it is at least O(ngroups), where ngroups = prod(shape), shape = map(len, keys); that is, linear in the number of combinations (cartesian product) of unique values of groupby keys. This can be huge when doing multi-key groupby. np.argsort(kind='mergesort') is O(count x log(count)) where count is the length of the data-frame. Both algorithms are `stable` sorts and that is necessary for correctness of groupby operations, e.g. consider: df.groupby(key)[col].transform('first')
def get_group_index_sorter(group_index, ngroups): """ algos.groupsort_indexer implements `counting sort` and it is at least O(ngroups), where ngroups = prod(shape) shape = map(len, keys) that is, linear in the number of combinations (cartesian product) of unique values of groupby keys. This can be huge when doing multi-key groupby. np.argsort(kind='mergesort') is O(count x log(count)) where count is the length of the data-frame; Both algorithms are `stable` sort and that is necessary for correctness of groupby operations. e.g. consider: df.groupby(key)[col].transform('first') """ count = len(group_index) alpha = 0.0 # taking complexities literally; there may be beta = 1.0 # some room for fine-tuning these parameters do_groupsort = (count > 0 and ((alpha + beta * ngroups) < (count * np.log(count)))) if do_groupsort: sorter, _ = algos.groupsort_indexer(ensure_int64(group_index), ngroups) return ensure_platform_int(sorter) else: return group_index.argsort(kind='mergesort')
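With alpha=0 and beta=1 as above, the heuristic reduces to comparing ngroups against count * log(count); for example:

import numpy as np

count, ngroups = 1000000, 50
# counting sort is ~O(count + ngroups); mergesort is ~O(count * log(count))
do_groupsort = count > 0 and (0.0 + 1.0 * ngroups) < count * np.log(count)
print(do_groupsort)  # True: few groups relative to rows, counting sort wins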
Group_index is offsets into cartesian product of all possible labels. This space can be huge, so this function compresses it, by computing offsets (comp_ids) into the list of unique labels (obs_group_ids).
def compress_group_index(group_index, sort=True): """ Group_index is offsets into cartesian product of all possible labels. This space can be huge, so this function compresses it, by computing offsets (comp_ids) into the list of unique labels (obs_group_ids). """ size_hint = min(len(group_index), hashtable._SIZE_HINT_LIMIT) table = hashtable.Int64HashTable(size_hint) group_index = ensure_int64(group_index) # note, group labels come out ascending (ie, 1,2,3 etc) comp_ids, obs_group_ids = table.get_labels_groupby(group_index) if sort and len(obs_group_ids) > 0: obs_group_ids, comp_ids = _reorder_by_uniques(obs_group_ids, comp_ids) return comp_ids, obs_group_ids
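Conceptually (setting aside the hashtable machinery), the sorted result is what np.unique with return_inverse=True produces, shown here on invented data:

import numpy as np

group_index = np.array([7, 3, 7, 11, 3], dtype='i8')
obs_group_ids, comp_ids = np.unique(group_index, return_inverse=True)
print(obs_group_ids)  # [ 3  7 11] -- the observed flat ids, sorted
print(comp_ids)       # [1 0 1 2 0] -- compressed per-row ids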
Sort values and reorder corresponding labels. values should be unique if labels is not None. Safe for use with mixed types (int, str); orders ints before strs.
def safe_sort(values, labels=None, na_sentinel=-1, assume_unique=False): """ Sort ``values`` and reorder corresponding ``labels``. ``values`` should be unique if ``labels`` is not None. Safe for use with mixed types (int, str), orders ints before strs. .. versionadded:: 0.19.0 Parameters ---------- values : list-like Sequence; must be unique if ``labels`` is not None. labels : list_like Indices to ``values``. All out of bound indices are treated as "not found" and will be masked with ``na_sentinel``. na_sentinel : int, default -1 Value in ``labels`` to mark "not found". Ignored when ``labels`` is None. assume_unique : bool, default False When True, ``values`` are assumed to be unique, which can speed up the calculation. Ignored when ``labels`` is None. Returns ------- ordered : ndarray Sorted ``values`` new_labels : ndarray Reordered ``labels``; returned when ``labels`` is not None. Raises ------ TypeError * If ``values`` is not list-like or if ``labels`` is neither None nor list-like * If ``values`` cannot be sorted ValueError * If ``labels`` is not None and ``values`` contain duplicates. """ if not is_list_like(values): raise TypeError("Only list-like objects are allowed to be passed to" "safe_sort as values") if not isinstance(values, np.ndarray): # don't convert to string types dtype, _ = infer_dtype_from_array(values) values = np.asarray(values, dtype=dtype) def sort_mixed(values): # order ints before strings, safe in py3 str_pos = np.array([isinstance(x, str) for x in values], dtype=bool) nums = np.sort(values[~str_pos]) strs = np.sort(values[str_pos]) return np.concatenate([nums, np.asarray(strs, dtype=object)]) sorter = None if lib.infer_dtype(values, skipna=False) == 'mixed-integer': # unorderable in py3 if mixed str/int ordered = sort_mixed(values) else: try: sorter = values.argsort() ordered = values.take(sorter) except TypeError: # try this anyway ordered = sort_mixed(values) # labels: if labels is None: return ordered if not is_list_like(labels): raise TypeError("Only list-like objects or None are allowed to be" "passed to safe_sort as labels") labels = ensure_platform_int(np.asarray(labels)) from pandas import Index if not assume_unique and not Index(values).is_unique: raise ValueError("values should be unique if labels is not None") if sorter is None: # mixed types (hash_klass, _), values = algorithms._get_data_algo( values, algorithms._hashtables) t = hash_klass(len(values)) t.map_locations(values) sorter = ensure_platform_int(t.lookup(ordered)) reverse_indexer = np.empty(len(sorter), dtype=np.int_) reverse_indexer.put(sorter, np.arange(len(sorter))) mask = (labels < -len(values)) | (labels >= len(values)) | \ (labels == na_sentinel) # (Out of bound indices will be masked with `na_sentinel` next, so we may # deal with them here without performance loss using `mode='wrap'`.) new_labels = reverse_indexer.take(labels, mode='wrap') np.putmask(new_labels, mask, na_sentinel) return ordered, ensure_platform_int(new_labels)
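The mixed int/str fallback simply orders all ints before all strings, each group sorted on its own; a sketch of sort_mixed on invented data:

import numpy as np

values = np.array([10, 'b', 2, 'a'], dtype=object)
str_pos = np.array([isinstance(x, str) for x in values], dtype=bool)
ordered = np.concatenate([np.sort(values[~str_pos]),
                          np.sort(values[str_pos])])
print(ordered)  # [2 10 'a' 'b'] -- ints first, then strings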
Attempt to prevent foot-shooting in a helpful way.
def _check_ne_builtin_clash(expr): """Attempt to prevent foot-shooting in a helpful way. Parameters ---------- terms : Term Terms can contain """ names = expr.names overlap = names & _ne_builtins if overlap: s = ', '.join(map(repr, overlap)) raise NumExprClobberingError('Variables in expression "{expr}" ' 'overlap with builtins: ({s})' .format(expr=expr, s=s))
Run the engine on the expression
def evaluate(self): """Run the engine on the expression This method performs alignment which is necessary no matter what engine is being used, thus its implementation is in the base class. Returns ------- obj : object The result of the passed expression. """ if not self._is_aligned: self.result_type, self.aligned_axes = _align(self.expr.terms) # make sure no names in resolvers and locals/globals clash res = self._evaluate() return _reconstruct_object(self.result_type, res, self.aligned_axes, self.expr.terms.return_type)
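The alignment step is what lets differently indexed operands combine before the engine runs; for instance via the public pd.eval entry point, which dispatches to an engine's evaluate():

import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})
# operands are aligned on the frame's axes before evaluation
print(pd.eval('df.a + df.b'))  # Series: [5, 7, 9]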
Find the appropriate Block subclass to use for the given values and dtype.
def get_block_type(values, dtype=None): """ Find the appropriate Block subclass to use for the given values and dtype. Parameters ---------- values : ndarray-like dtype : numpy or pandas dtype Returns ------- cls : class, subclass of Block """ dtype = dtype or values.dtype vtype = dtype.type if is_sparse(dtype): # Need this first(ish) so that Sparse[datetime] is sparse cls = ExtensionBlock elif is_categorical(values): cls = CategoricalBlock elif issubclass(vtype, np.datetime64): assert not is_datetime64tz_dtype(values) cls = DatetimeBlock elif is_datetime64tz_dtype(values): cls = DatetimeTZBlock elif is_interval_dtype(dtype) or is_period_dtype(dtype): cls = ObjectValuesExtensionBlock elif is_extension_array_dtype(values): cls = ExtensionBlock elif issubclass(vtype, np.floating): cls = FloatBlock elif issubclass(vtype, np.timedelta64): cls = TimeDeltaBlock elif issubclass(vtype, np.complexfloating): cls = ComplexBlock elif issubclass(vtype, np.integer): cls = IntBlock elif dtype == np.bool_: cls = BoolBlock else: cls = ObjectBlock return cls
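The effect is visible through the BlockManager; note that `_data` is private internal API in this era of pandas and subject to change, so this is illustration only:

import pandas as pd

df = pd.DataFrame({'i': [1, 2],
                   'f': [1.5, 2.5],
                   'c': pd.Categorical(['x', 'y'])})
# one block per dtype family: IntBlock, FloatBlock, CategoricalBlock
for blk in df._data.blocks:
    print(type(blk).__name__, blk.dtype)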
return a new extended list of blocks, given the result
def _extend_blocks(result, blocks=None): """ return a new extended list of blocks, given the result """ from pandas.core.internals import BlockManager if blocks is None: blocks = [] if isinstance(result, list): for r in result: if isinstance(r, list): blocks.extend(r) else: blocks.append(r) elif isinstance(result, BlockManager): blocks.extend(result.blocks) else: blocks.append(result) return blocks
guarantee the shape of the values to be at least 1-d
def _block_shape(values, ndim=1, shape=None): """ guarantee the shape of the values to be at least 1 d """ if values.ndim < ndim: if shape is None: shape = values.shape if not is_extension_array_dtype(values): # TODO: https://github.com/pandas-dev/pandas/issues/23023 # block.shape is incorrect for "2D" ExtensionArrays # We can't, and don't need to, reshape. values = values.reshape(tuple((1, ) + shape)) return values
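For a plain ndarray the reshape is just a leading length-1 axis, which is how a 1-d column is stored inside a 2-d BlockManager:

import numpy as np

values = np.array([1, 2, 3])
reshaped = values.reshape((1,) + values.shape)
print(reshaped.shape)  # (1, 3)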
If possible, reshape `arr` to have shape `new_shape`, with a couple of exceptions (see gh-13012).
def _safe_reshape(arr, new_shape): """ If possible, reshape `arr` to have shape `new_shape`, with a couple of exceptions (see gh-13012): 1) If `arr` is a ExtensionArray or Index, `arr` will be returned as is. 2) If `arr` is a Series, the `_values` attribute will be reshaped and returned. Parameters ---------- arr : array-like, object to be reshaped new_shape : int or tuple of ints, the new shape """ if isinstance(arr, ABCSeries): arr = arr._values if not isinstance(arr, ABCExtensionArray): arr = arr.reshape(new_shape) return arr
Return a new ndarray, try to preserve dtype if possible.
def _putmask_smart(v, m, n): """ Return a new ndarray, try to preserve dtype if possible. Parameters ---------- v : `values`, updated in-place (array like) m : `mask`, applies to both sides (array like) n : `new values` either scalar or an array like aligned with `values` Returns ------- values : ndarray with updated values this *may* be a copy of the original See Also -------- ndarray.putmask """ # we cannot use np.asarray() here as we cannot have conversions # that numpy does when numeric are mixed with strings # n should be the length of the mask or a scalar here if not is_list_like(n): n = np.repeat(n, len(m)) elif isinstance(n, np.ndarray) and n.ndim == 0: # numpy scalar n = np.repeat(np.array(n, ndmin=1), len(m)) # see if we are only masking values that if putted # will work in the current dtype try: nn = n[m] # make sure that we have a nullable type # if we have nulls if not _isna_compat(v, nn[0]): raise ValueError # we ignore ComplexWarning here with warnings.catch_warnings(record=True): warnings.simplefilter("ignore", np.ComplexWarning) nn_at = nn.astype(v.dtype) # avoid invalid dtype comparisons # between numbers & strings # only compare integers/floats # don't compare integers to datetimelikes if (not is_numeric_v_string_like(nn, nn_at) and (is_float_dtype(nn.dtype) or is_integer_dtype(nn.dtype) and is_float_dtype(nn_at.dtype) or is_integer_dtype(nn_at.dtype))): comp = (nn == nn_at) if is_list_like(comp) and comp.all(): nv = v.copy() nv[m] = nn_at return nv except (ValueError, IndexError, TypeError, OverflowError): pass n = np.asarray(n) def _putmask_preserve(nv, n): try: nv[m] = n[m] except (IndexError, ValueError): nv[m] = n return nv # preserves dtype if possible if v.dtype.kind == n.dtype.kind: return _putmask_preserve(v, n) # change the dtype if needed dtype, _ = maybe_promote(n.dtype) if is_extension_type(v.dtype) and is_object_dtype(dtype): v = v.get_values(dtype) else: v = v.astype(dtype) return _putmask_preserve(v, n)
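The dtype question this answers, in miniature: masked assignment can keep the dtype when the new values fit, and must upcast when they do not (e.g. NaN into int64):

import numpy as np

v = np.array([1, 2, 3, 4], dtype='int64')
m = np.array([False, True, False, True])

ok = v.copy()
np.putmask(ok, m, 99)   # 99 fits int64, dtype preserved
print(ok.dtype, ok)     # int64 [ 1 99  3 99]

up = v.astype('float64')  # NaN cannot live in int64, so upcast first
np.putmask(up, m, np.nan)
print(up.dtype, up)       # float64 [ 1. nan  3. nan]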
ndim inference and validation.
def _check_ndim(self, values, ndim): """ ndim inference and validation. Infers ndim from 'values' if not provided to __init__. Validates that values.ndim and ndim are consistent if and only if the class variable '_validate_ndim' is True. Parameters ---------- values : array-like ndim : int or None Returns ------- ndim : int Raises ------ ValueError : the number of dimensions do not match """ if ndim is None: ndim = values.ndim if self._validate_ndim and values.ndim != ndim: msg = ("Wrong number of dimensions. values.ndim != ndim " "[{} != {}]") raise ValueError(msg.format(values.ndim, ndim)) return ndim
validate that we have an astype-able to categorical; returns a boolean if we are a categorical
def is_categorical_astype(self, dtype): """ validate that we have a astypeable to categorical, returns a boolean if we are a categorical """ if dtype is Categorical or dtype is CategoricalDtype: # this is a pd.Categorical, but is not # a valid type for astypeing raise TypeError("invalid type {0} for astype".format(dtype)) elif is_categorical_dtype(dtype): return True return False
return an internal format, currently just the ndarray; this is often overridden to handle to_dense-like operations
def get_values(self, dtype=None): """ return an internal format, currently just the ndarray this is often overridden to handle to_dense like operations """ if is_object_dtype(dtype): return self.values.astype(object) return self.values
Create a new block, with type inference; propagate any values that are not specified
def make_block(self, values, placement=None, ndim=None): """ Create a new block, with type inference propagate any values that are not specified """ if placement is None: placement = self.mgr_locs if ndim is None: ndim = self.ndim return make_block(values, placement=placement, ndim=ndim)
Wrap given values in a block of same type as self.
def make_block_same_class(self, values, placement=None, ndim=None, dtype=None): """ Wrap given values in a block of same type as self. """ if dtype is not None: # issue 19431 fastparquet is passing this warnings.warn("dtype argument is deprecated, will be removed " "in a future release.", DeprecationWarning) if placement is None: placement = self.mgr_locs return make_block(values, placement=placement, ndim=ndim, klass=self.__class__, dtype=dtype)
Perform __getitem__-like, return result as block.
def getitem_block(self, slicer, new_mgr_locs=None): """ Perform __getitem__-like, return result as block. As of now, only supports slices that preserve dimensionality. """ if new_mgr_locs is None: if isinstance(slicer, tuple): axis0_slicer = slicer[0] else: axis0_slicer = slicer new_mgr_locs = self.mgr_locs[axis0_slicer] new_values = self._slice(slicer) if self._validate_ndim and new_values.ndim != self.ndim: raise ValueError("Only same dim slicing is allowed") return self.make_block_same_class(new_values, new_mgr_locs)
Concatenate list of single blocks of the same type.
def concat_same_type(self, to_concat, placement=None): """ Concatenate list of single blocks of the same type. """ values = self._concatenator([blk.values for blk in to_concat], axis=self.ndim - 1) return self.make_block_same_class( values, placement=placement or slice(0, len(values), 1))
Delete given loc(-s) from block in-place.
def delete(self, loc): """ Delete given loc(-s) from block in-place. """ self.values = np.delete(self.values, loc, 0) self.mgr_locs = self.mgr_locs.delete(loc)
apply the function to my values; return a block if we are not one
def apply(self, func, **kwargs): """ apply the function to my values; return a block if we are not one """ with np.errstate(all='ignore'): result = func(self.values, **kwargs) if not isinstance(result, Block): result = self.make_block(values=_block_shape(result, ndim=self.ndim)) return result
fillna on the block with the value. If we fail, then convert to ObjectBlock and try again
def fillna(self, value, limit=None, inplace=False, downcast=None): """ fillna on the block with the value. If we fail, then convert to ObjectBlock and try again """ inplace = validate_bool_kwarg(inplace, 'inplace') if not self._can_hold_na: if inplace: return self else: return self.copy() mask = isna(self.values) if limit is not None: if not is_integer(limit): raise ValueError('Limit must be an integer') if limit < 1: raise ValueError('Limit must be greater than 0') if self.ndim > 2: raise NotImplementedError("number of dimensions for 'fillna' " "is currently limited to 2") mask[mask.cumsum(self.ndim - 1) > limit] = False # fillna, but if we cannot coerce, then try again as an ObjectBlock try: values, _ = self._try_coerce_args(self.values, value) blocks = self.putmask(mask, value, inplace=inplace) blocks = [b.make_block(values=self._try_coerce_result(b.values)) for b in blocks] return self._maybe_downcast(blocks, downcast) except (TypeError, ValueError): # we can't process the value, but nothing to do if not mask.any(): return self if inplace else self.copy() # operate column-by-column def f(m, v, i): block = self.coerce_to_target_dtype(value) # slice out our block if i is not None: block = block.getitem_block(slice(i, i + 1)) return block.fillna(value, limit=limit, inplace=inplace, downcast=None) return self.split_and_operate(mask, f, inplace)
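The limit handling above caps how many NaNs get filled along the axis; observable through the public fillna, which dispatches down to this block method:

import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, np.nan, np.nan, 5.0])
print(s.fillna(0.0, limit=2).values)  # [ 1.  0.  0. nan  5.] -- only 2 filled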
split the block per-column, and apply the callable f per-column, returning a new block for each. Handle masking which will not change a block unless needed.
def split_and_operate(self, mask, f, inplace): """ split the block per-column, and apply the callable f per-column, return a new block for each. Handle masking which will not change a block unless needed. Parameters ---------- mask : 2-d boolean mask f : callable accepting (1d-mask, 1d values, indexer) inplace : boolean Returns ------- list of blocks """ if mask is None: mask = np.ones(self.shape, dtype=bool) new_values = self.values def make_a_block(nv, ref_loc): if isinstance(nv, Block): block = nv elif isinstance(nv, list): block = nv[0] else: # Put back the dimension that was taken from it and make # a block out of the result. try: nv = _block_shape(nv, ndim=self.ndim) except (AttributeError, NotImplementedError): pass block = self.make_block(values=nv, placement=ref_loc) return block # ndim == 1 if self.ndim == 1: if mask.any(): nv = f(mask, new_values, None) else: nv = new_values if inplace else new_values.copy() block = make_a_block(nv, self.mgr_locs) return [block] # ndim > 1 new_blocks = [] for i, ref_loc in enumerate(self.mgr_locs): m = mask[i] v = new_values[i] # need a new block if m.any(): nv = f(m, v, i) else: nv = v if inplace else v.copy() block = make_a_block(nv, [ref_loc]) new_blocks.append(block) return new_blocks
try to downcast each item to the dict of dtypes if present
def downcast(self, dtypes=None): """ try to downcast each item to the dict of dtypes if present """ # turn it off completely if dtypes is False: return self values = self.values # single block handling if self._is_single_block: # try to cast all non-floats here if dtypes is None: dtypes = 'infer' nv = maybe_downcast_to_dtype(values, dtypes) return self.make_block(nv) # ndim > 1 if dtypes is None: return self if not (dtypes == 'infer' or isinstance(dtypes, dict)): raise ValueError("downcast must have a dictionary or 'infer' as " "its argument") # operate column-by-column # this is expensive as it splits the blocks items-by-item def f(m, v, i): if dtypes == 'infer': dtype = 'infer' else: raise AssertionError("dtypes as dict is not supported yet") if dtype is not None: v = maybe_downcast_to_dtype(v, dtype) return v return self.split_and_operate(None, f, False)
Coerce to the new type
def _astype(self, dtype, copy=False, errors='raise', values=None, **kwargs): """Coerce to the new type Parameters ---------- dtype : str, dtype convertible copy : boolean, default False copy if indicated errors : str, {'raise', 'ignore'}, default 'ignore' - ``raise`` : allow exceptions to be raised - ``ignore`` : suppress exceptions. On error return original object Returns ------- Block """ errors_legal_values = ('raise', 'ignore') if errors not in errors_legal_values: invalid_arg = ("Expected value of kwarg 'errors' to be one of {}. " "Supplied value is '{}'".format( list(errors_legal_values), errors)) raise ValueError(invalid_arg) if (inspect.isclass(dtype) and issubclass(dtype, (PandasExtensionDtype, ExtensionDtype))): msg = ("Expected an instance of {}, but got the class instead. " "Try instantiating 'dtype'.".format(dtype.__name__)) raise TypeError(msg) # may need to convert to categorical if self.is_categorical_astype(dtype): # deprecated 17636 if ('categories' in kwargs or 'ordered' in kwargs): if isinstance(dtype, CategoricalDtype): raise TypeError( "Cannot specify a CategoricalDtype and also " "`categories` or `ordered`. Use " "`dtype=CategoricalDtype(categories, ordered)`" " instead.") warnings.warn("specifying 'categories' or 'ordered' in " ".astype() is deprecated; pass a " "CategoricalDtype instead", FutureWarning, stacklevel=7) categories = kwargs.get('categories', None) ordered = kwargs.get('ordered', None) if com._any_not_none(categories, ordered): dtype = CategoricalDtype(categories, ordered) if is_categorical_dtype(self.values): # GH 10696/18593: update an existing categorical efficiently return self.make_block(self.values.astype(dtype, copy=copy)) return self.make_block(Categorical(self.values, dtype=dtype)) dtype = pandas_dtype(dtype) # astype processing if is_dtype_equal(self.dtype, dtype): if copy: return self.copy() return self try: # force the copy here if values is None: if self.is_extension: values = self.values.astype(dtype) else: if issubclass(dtype.type, str): # use native type formatting for datetime/tz/timedelta if self.is_datelike: values = self.to_native_types() # astype formatting else: values = self.get_values() else: values = self.get_values(dtype=dtype) # _astype_nansafe works fine with 1-d only values = astype_nansafe(values.ravel(), dtype, copy=True) # TODO(extension) # should we make this attribute? try: values = values.reshape(self.shape) except AttributeError: pass newb = make_block(values, placement=self.mgr_locs, ndim=self.ndim) except Exception: # noqa: E722 if errors == 'raise': raise newb = self.copy() if copy else self if newb.is_numeric and self.is_numeric: if newb.shape != self.shape: raise TypeError( "cannot set astype for copy = [{copy}] for dtype " "({dtype} [{shape}]) to different shape " "({newb_dtype} [{newb_shape}])".format( copy=copy, dtype=self.dtype.name, shape=self.shape, newb_dtype=newb.dtype.name, newb_shape=newb.shape)) return newb
require the same dtype as ourselves
def _can_hold_element(self, element): """ require the same dtype as ourselves """ dtype = self.values.dtype.type tipo = maybe_infer_dtype_type(element) if tipo is not None: return issubclass(tipo.type, dtype) return isinstance(element, dtype)
try to cast the result to our original type; we may have roundtripped through object in the mean-time
def _try_cast_result(self, result, dtype=None): """ try to cast the result to our original type, we may have roundtripped thru object in the mean-time """ if dtype is None: dtype = self.dtype if self.is_integer or self.is_bool or self.is_datetime: pass elif self.is_float and result.dtype == self.dtype: # protect against a bool/object showing up here if isinstance(dtype, str) and dtype == 'infer': return result if not isinstance(dtype, type): dtype = dtype.type if issubclass(dtype, (np.bool_, np.object_)): if issubclass(dtype, np.bool_): if isna(result).all(): return result.astype(np.bool_) else: result = result.astype(np.object_) result[result == 1] = True result[result == 0] = False return result else: return result.astype(np.object_) return result # may need to change the dtype here return maybe_downcast_to_dtype(result, dtype)
provide coercion to our input arguments
def _try_coerce_args(self, values, other): """ provide coercion to our input arguments """ if np.any(notna(other)) and not self._can_hold_element(other): # coercion issues # let higher levels handle raise TypeError("cannot convert {} to an {}".format( type(other).__name__, type(self).__name__.lower().replace('Block', ''))) return values, other
convert to our native types format, slicing if desired
def to_native_types(self, slicer=None, na_rep='nan', quoting=None, **kwargs): """ convert to our native types format, slicing if desired """ values = self.get_values() if slicer is not None: values = values[:, slicer] mask = isna(values) if not self.is_object and not quoting: values = values.astype(str) else: values = np.array(values, dtype='object') values[mask] = na_rep return values
copy constructor
def copy(self, deep=True): """ copy constructor """ values = self.values if deep: values = values.copy() return self.make_block_same_class(values, ndim=self.ndim)
replace the to_replace value with value; possible to create new blocks here. This is just a call to putmask. regex is not used here; it is used in ObjectBlocks. It is here for API compatibility.
def replace(self, to_replace, value, inplace=False, filter=None, regex=False, convert=True): """replace the to_replace value with value, possible to create new blocks here this is just a call to putmask. regex is not used here. It is used in ObjectBlocks. It is here for API compatibility. """ inplace = validate_bool_kwarg(inplace, 'inplace') original_to_replace = to_replace # try to replace, if we raise an error, convert to ObjectBlock and # retry try: values, to_replace = self._try_coerce_args(self.values, to_replace) mask = missing.mask_missing(values, to_replace) if filter is not None: filtered_out = ~self.mgr_locs.isin(filter) mask[filtered_out.nonzero()[0]] = False blocks = self.putmask(mask, value, inplace=inplace) if convert: blocks = [b.convert(by_item=True, numeric=False, copy=not inplace) for b in blocks] return blocks except (TypeError, ValueError): # GH 22083, TypeError or ValueError occurred within error handling # causes infinite loop. Cast and retry only if not objectblock. if is_object_dtype(self): raise # try again with a compatible block block = self.astype(object) return block.replace(to_replace=original_to_replace, value=value, inplace=inplace, filter=filter, regex=regex, convert=convert)
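At its core the non-regex path is mask-then-putmask, which is what the public replace exercises:

import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 2.0])
print(s.replace(2.0, 0.0).values)  # [1. 0. 3. 0.]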
Set the value inplace, returning a maybe different typed block.
def setitem(self, indexer, value): """Set the value inplace, returning a a maybe different typed block. Parameters ---------- indexer : tuple, list-like, array-like, slice The subset of self.values to set value : object The value being set Returns ------- Block Notes ----- `indexer` is a direct slice/positional indexer. `value` must be a compatible shape. """ # coerce None values, if appropriate if value is None: if self.is_numeric: value = np.nan # coerce if block dtype can store value values = self.values try: values, value = self._try_coerce_args(values, value) # can keep its own dtype if hasattr(value, 'dtype') and is_dtype_equal(values.dtype, value.dtype): dtype = self.dtype else: dtype = 'infer' except (TypeError, ValueError): # current dtype cannot store value, coerce to common dtype find_dtype = False if hasattr(value, 'dtype'): dtype = value.dtype find_dtype = True elif lib.is_scalar(value): if isna(value): # NaN promotion is handled in latter path dtype = False else: dtype, _ = infer_dtype_from_scalar(value, pandas_dtype=True) find_dtype = True else: dtype = 'infer' if find_dtype: dtype = find_common_type([values.dtype, dtype]) if not is_dtype_equal(self.dtype, dtype): b = self.astype(dtype) return b.setitem(indexer, value) # value must be storeable at this moment arr_value = np.array(value) # cast the values to a type that can hold nan (if necessary) if not self._can_hold_element(value): dtype, _ = maybe_promote(arr_value.dtype) values = values.astype(dtype) transf = (lambda x: x.T) if self.ndim == 2 else (lambda x: x) values = transf(values) # length checking check_setitem_lengths(indexer, value, values) def _is_scalar_indexer(indexer): # return True if we are all scalar indexers if arr_value.ndim == 1: if not isinstance(indexer, tuple): indexer = tuple([indexer]) return any(isinstance(idx, np.ndarray) and len(idx) == 0 for idx in indexer) return False def _is_empty_indexer(indexer): # return a boolean if we have an empty indexer if is_list_like(indexer) and not len(indexer): return True if arr_value.ndim == 1: if not isinstance(indexer, tuple): indexer = tuple([indexer]) return any(isinstance(idx, np.ndarray) and len(idx) == 0 for idx in indexer) return False # empty indexers # 8669 (empty) if _is_empty_indexer(indexer): pass # setting a single element for each dim and with a rhs that could # be say a list # GH 6043 elif _is_scalar_indexer(indexer): values[indexer] = value # if we are an exact match (ex-broadcasting), # then use the resultant dtype elif (len(arr_value.shape) and arr_value.shape[0] == values.shape[0] and np.prod(arr_value.shape) == np.prod(values.shape)): values[indexer] = value try: values = values.astype(arr_value.dtype) except ValueError: pass # set else: values[indexer] = value # coerce and try to infer the dtypes of the result values = self._try_coerce_and_cast_result(values, dtype) block = self.make_block(transf(values)) return block
putmask the data to the block; it is possible that we may create a new dtype of block
def putmask(self, mask, new, align=True, inplace=False, axis=0, transpose=False): """ putmask the data to the block; it is possible that we may create a new dtype of block return the resulting block(s) Parameters ---------- mask : the condition to respect new : a ndarray/object align : boolean, perform alignment on other/cond, default is True inplace : perform inplace modification, default is False axis : int transpose : boolean Set to True if self is stored with axes reversed Returns ------- a list of new blocks, the result of the putmask """ new_values = self.values if inplace else self.values.copy() new = getattr(new, 'values', new) mask = getattr(mask, 'values', mask) # if we are passed a scalar None, convert it here if not is_list_like(new) and isna(new) and not self.is_object: new = self.fill_value if self._can_hold_element(new): _, new = self._try_coerce_args(new_values, new) if transpose: new_values = new_values.T # If the default repeat behavior in np.putmask would go in the # wrong direction, then explicitly repeat and reshape new instead if getattr(new, 'ndim', 0) >= 1: if self.ndim - 1 == new.ndim and axis == 1: new = np.repeat( new, new_values.shape[-1]).reshape(self.shape) new = new.astype(new_values.dtype) # we require exact matches between the len of the # values we are setting (or is compat). np.putmask # doesn't check this and will simply truncate / pad # the output, but we want sane error messages # # TODO: this prob needs some better checking # for 2D cases if ((is_list_like(new) and np.any(mask[mask]) and getattr(new, 'ndim', 1) == 1)): if not (mask.shape[-1] == len(new) or mask[mask].shape[-1] == len(new) or len(new) == 1): raise ValueError("cannot assign mismatch " "length to masked array") np.putmask(new_values, mask, new) # maybe upcast me elif mask.any(): if transpose: mask = mask.T if isinstance(new, np.ndarray): new = new.T axis = new_values.ndim - axis - 1 # Pseudo-broadcast if getattr(new, 'ndim', 0) >= 1: if self.ndim - 1 == new.ndim: new_shape = list(new.shape) new_shape.insert(axis, 1) new = new.reshape(tuple(new_shape)) # operate column-by-column def f(m, v, i): if i is None: # ndim==1 case. n = new else: if isinstance(new, np.ndarray): n = np.squeeze(new[i % new.shape[0]]) else: n = np.array(new) # type of the new block dtype, _ = maybe_promote(n.dtype) # we need to explicitly astype here to make a copy n = n.astype(dtype) nv = _putmask_smart(v, m, n) return nv new_blocks = self.split_and_operate(mask, f, inplace) return new_blocks if inplace: return [self] if transpose: new_values = new_values.T return [self.make_block(new_values)]
coerce the current block to a dtype compat for other; we will return a block, possibly object, and not raise
def coerce_to_target_dtype(self, other): """ coerce the current block to a dtype compat for other we will return a block, possibly object, and not raise we can also safely try to coerce to the same dtype and will receive the same block """ # if we cannot then coerce to object dtype, _ = infer_dtype_from(other, pandas_dtype=True) if is_dtype_equal(self.dtype, dtype): return self if self.is_bool or is_object_dtype(dtype) or is_bool_dtype(dtype): # we don't upcast to bool return self.astype(object) elif ((self.is_float or self.is_complex) and (is_integer_dtype(dtype) or is_float_dtype(dtype))): # don't coerce float/complex to int return self elif (self.is_datetime or is_datetime64_dtype(dtype) or is_datetime64tz_dtype(dtype)): # not a datetime if not ((is_datetime64_dtype(dtype) or is_datetime64tz_dtype(dtype)) and self.is_datetime): return self.astype(object) # don't upcast timezone with different timezone or no timezone mytz = getattr(self.dtype, 'tz', None) othertz = getattr(dtype, 'tz', None) if str(mytz) != str(othertz): return self.astype(object) raise AssertionError("possible recursion in " "coerce_to_target_dtype: {} {}".format( self, other)) elif (self.is_timedelta or is_timedelta64_dtype(dtype)): # not a timedelta if not (is_timedelta64_dtype(dtype) and self.is_timedelta): return self.astype(object) raise AssertionError("possible recursion in " "coerce_to_target_dtype: {} {}".format( self, other)) try: return self.astype(dtype) except (ValueError, TypeError, OverflowError): pass return self.astype(object)
fillna but using the interpolate machinery
def _interpolate_with_fill(self, method='pad', axis=0, inplace=False, limit=None, fill_value=None, coerce=False, downcast=None): """ fillna but using the interpolate machinery """ inplace = validate_bool_kwarg(inplace, 'inplace') # if we are coercing, then don't force the conversion # if the block can't hold the type if coerce: if not self._can_hold_na: if inplace: return [self] else: return [self.copy()] values = self.values if inplace else self.values.copy() values, fill_value = self._try_coerce_args(values, fill_value) values = missing.interpolate_2d(values, method=method, axis=axis, limit=limit, fill_value=fill_value, dtype=self.dtype) values = self._try_coerce_result(values) blocks = [self.make_block_same_class(values, ndim=self.ndim)] return self._maybe_downcast(blocks, downcast)
interpolate using scipy wrappers
def _interpolate(self, method=None, index=None, values=None, fill_value=None, axis=0, limit=None, limit_direction='forward', limit_area=None, inplace=False, downcast=None, **kwargs): """ interpolate using scipy wrappers """ inplace = validate_bool_kwarg(inplace, 'inplace') data = self.values if inplace else self.values.copy() # only deal with floats if not self.is_float: if not self.is_integer: return self data = data.astype(np.float64) if fill_value is None: fill_value = self.fill_value if method in ('krogh', 'piecewise_polynomial', 'pchip'): if not index.is_monotonic: raise ValueError("{0} interpolation requires that the " "index be monotonic.".format(method)) # process 1-d slices in the axis direction def func(x): # process a 1-d slice, returning it # should the axis argument be handled below in apply_along_axis? # i.e. not an arg to missing.interpolate_1d return missing.interpolate_1d(index, x, method=method, limit=limit, limit_direction=limit_direction, limit_area=limit_area, fill_value=fill_value, bounds_error=False, **kwargs) # interp each column independently interp_values = np.apply_along_axis(func, axis, data) blocks = [self.make_block_same_class(interp_values)] return self._maybe_downcast(blocks, downcast)
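Both interpolation paths are reachable from the public Series.interpolate, which for float data lands in these block methods:

import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, np.nan, 4.0])
print(s.interpolate(method='linear').values)  # [1. 2. 3. 4.]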
Take values according to indexer and return them as a block.
def take_nd(self, indexer, axis, new_mgr_locs=None, fill_tuple=None): """ Take values according to indexer and return them as a block. """ # algos.take_nd dispatches for DatetimeTZBlock, CategoricalBlock # so need to preserve types # sparse is treated like an ndarray, but needs .get_values() shaping values = self.values if self.is_sparse: values = self.get_values() if fill_tuple is None: fill_value = self.fill_value new_values = algos.take_nd(values, indexer, axis=axis, allow_fill=False, fill_value=fill_value) else: fill_value = fill_tuple[0] new_values = algos.take_nd(values, indexer, axis=axis, allow_fill=True, fill_value=fill_value) if new_mgr_locs is None: if axis == 0: slc = libinternals.indexer_as_slice(indexer) if slc is not None: new_mgr_locs = self.mgr_locs[slc] else: new_mgr_locs = self.mgr_locs[indexer] else: new_mgr_locs = self.mgr_locs if not is_dtype_equal(new_values.dtype, self.dtype): return self.make_block(new_values, new_mgr_locs) else: return self.make_block_same_class(new_values, new_mgr_locs)
return block for the diff of the values
def diff(self, n, axis=1): """ return block for the diff of the values """ new_values = algos.diff(self.values, n, axis=axis) return [self.make_block(values=new_values)]
shift the block by periods, possibly upcast
def shift(self, periods, axis=0, fill_value=None): """ shift the block by periods, possibly upcast """ # convert integer to float if necessary. need to do a lot more than # that, handle boolean etc also new_values, fill_value = maybe_upcast(self.values, fill_value) # make sure array sent to np.roll is c_contiguous f_ordered = new_values.flags.f_contiguous if f_ordered: new_values = new_values.T axis = new_values.ndim - axis - 1 if np.prod(new_values.shape): new_values = np.roll(new_values, ensure_platform_int(periods), axis=axis) axis_indexer = [slice(None)] * self.ndim if periods > 0: axis_indexer[axis] = slice(None, periods) else: axis_indexer[axis] = slice(periods, None) new_values[tuple(axis_indexer)] = fill_value # restore original order if f_ordered: new_values = new_values.T return [self.make_block(new_values)]
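The roll-then-fill trick in isolation, with NaN as the fill value for an already upcast float array:

import numpy as np

values = np.array([1.0, 2.0, 3.0, 4.0])
periods = 2

shifted = np.roll(values, periods)
shifted[:periods] = np.nan  # blank out the slots that wrapped around
print(shifted)              # [nan nan  1.  2.]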
evaluate the block; return result block(s) from the result
def where(self, other, cond, align=True, errors='raise', try_cast=False, axis=0, transpose=False): """ evaluate the block; return result block(s) from the result Parameters ---------- other : a ndarray/object cond : the condition to respect align : boolean, perform alignment on other/cond errors : str, {'raise', 'ignore'}, default 'raise' - ``raise`` : allow exceptions to be raised - ``ignore`` : suppress exceptions. On error return original object axis : int transpose : boolean Set to True if self is stored with axes reversed Returns ------- a new block(s), the result of the func """ import pandas.core.computation.expressions as expressions assert errors in ['raise', 'ignore'] values = self.values orig_other = other if transpose: values = values.T other = getattr(other, '_values', getattr(other, 'values', other)) cond = getattr(cond, 'values', cond) # If the default broadcasting would go in the wrong direction, then # explicitly reshape other instead if getattr(other, 'ndim', 0) >= 1: if values.ndim - 1 == other.ndim and axis == 1: other = other.reshape(tuple(other.shape + (1, ))) elif transpose and values.ndim == self.ndim - 1: cond = cond.T if not hasattr(cond, 'shape'): raise ValueError("where must have a condition that is ndarray " "like") # our where function def func(cond, values, other): if cond.ravel().all(): return values values, other = self._try_coerce_args(values, other) try: return self._try_coerce_result(expressions.where( cond, values, other)) except Exception as detail: if errors == 'raise': raise TypeError( 'Could not operate [{other!r}] with block values ' '[{detail!s}]'.format(other=other, detail=detail)) else: # return the values result = np.empty(values.shape, dtype='float64') result.fill(np.nan) return result # see if we can operate on the entire block, or need item-by-item # or if we are a single block (ndim == 1) try: result = func(cond, values, other) except TypeError: # we cannot coerce, return a compat dtype # we are explicitly ignoring errors block = self.coerce_to_target_dtype(other) blocks = block.where(orig_other, cond, align=align, errors=errors, try_cast=try_cast, axis=axis, transpose=transpose) return self._maybe_downcast(blocks, 'infer') if self._can_hold_na or self.ndim == 1: if transpose: result = result.T # try to cast if requested if try_cast: result = self._try_cast_result(result) return self.make_block(result) # might need to separate out blocks axis = cond.ndim - 1 cond = cond.swapaxes(axis, 0) mask = np.array([cond[i].all() for i in range(cond.shape[0])], dtype=bool) result_blocks = [] for m in [mask, ~mask]: if m.any(): r = self._try_cast_result(result.take(m.nonzero()[0], axis=axis)) result_blocks.append( self.make_block(r.T, placement=self.mgr_locs[m])) return result_blocks
Return a list of unstacked blocks of self
def _unstack(self, unstacker_func, new_columns, n_rows, fill_value): """Return a list of unstacked blocks of self Parameters ---------- unstacker_func : callable Partially applied unstacker. new_columns : Index All columns of the unstacked BlockManager. n_rows : int Only used in ExtensionBlock.unstack fill_value : int Only used in ExtensionBlock.unstack Returns ------- blocks : list of Block New blocks of unstacked values. mask : array_like of bool The mask of columns of `blocks` we should keep. """ unstacker = unstacker_func(self.values.T) new_items = unstacker.get_new_columns() new_placement = new_columns.get_indexer(new_items) new_values, mask = unstacker.get_new_values() mask = mask.any(0) new_values = new_values.T[mask] new_placement = new_placement[mask] blocks = [make_block(new_values, placement=new_placement)] return blocks, mask
compute the quantiles of the block
def quantile(self, qs, interpolation='linear', axis=0): """ compute the quantiles of the Parameters ---------- qs: a scalar or list of the quantiles to be computed interpolation: type of interpolation, default 'linear' axis: axis to compute, default 0 Returns ------- Block """ if self.is_datetimetz: # TODO: cleanup this special case. # We need to operate on i8 values for datetimetz # but `Block.get_values()` returns an ndarray of objects # right now. We need an API for "values to do numeric-like ops on" values = self.values.asi8 # TODO: NonConsolidatableMixin shape # Usual shape inconsistencies for ExtensionBlocks if self.ndim > 1: values = values[None, :] else: values = self.get_values() values, _ = self._try_coerce_args(values, values) is_empty = values.shape[axis] == 0 orig_scalar = not is_list_like(qs) if orig_scalar: # make list-like, unpack later qs = [qs] if is_empty: if self.ndim == 1: result = self._na_value else: # create the array of na_values # 2d len(values) * len(qs) result = np.repeat(np.array([self.fill_value] * len(qs)), len(values)).reshape(len(values), len(qs)) else: # asarray needed for Sparse, see GH#24600 # TODO: Why self.values and not values? mask = np.asarray(isna(self.values)) result = nanpercentile(values, np.array(qs) * 100, axis=axis, na_value=self.fill_value, mask=mask, ndim=self.ndim, interpolation=interpolation) result = np.array(result, copy=False) if self.ndim > 1: result = result.T if orig_scalar and not lib.is_scalar(result): # result could be scalar in case with is_empty and self.ndim == 1 assert result.shape[-1] == 1, result.shape result = result[..., 0] result = lib.item_from_zerodim(result) ndim = getattr(result, 'ndim', None) or 0 result = self._try_coerce_result(result) return make_block(result, placement=np.arange(len(result)), ndim=ndim)
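Through the public API, a scalar q unpacks the trailing dimension while a list-like qs keeps it, matching the orig_scalar handling above:

import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0])
print(s.quantile(0.5))          # 2.5 -- scalar q, scalar result
print(s.quantile([0.25, 0.5]))  # list-like qs, Series of results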