title | diff | body | url | created_at | closed_at | merged_at | updated_at |
|---|---|---|---|---|---|---|---|
Backport PR #54625 on branch 2.1.x (BUG: Fix error in printing timezone series) | diff --git a/doc/source/whatsnew/v2.1.0.rst b/doc/source/whatsnew/v2.1.0.rst
index 7dd2fb570c695..7b0c120c3bbdb 100644
--- a/doc/source/whatsnew/v2.1.0.rst
+++ b/doc/source/whatsnew/v2.1.0.rst
@@ -667,6 +667,7 @@ Datetimelike
- Bug in constructing a :class:`Timestamp` from a string representing a time without a date inferring an incorrect unit (:issue:`54097`)
- Bug in constructing a :class:`Timestamp` with ``ts_input=pd.NA`` raising ``TypeError`` (:issue:`45481`)
- Bug in parsing datetime strings with weekday but no day e.g. "2023 Sept Thu" incorrectly raising ``AttributeError`` instead of ``ValueError`` (:issue:`52659`)
+- Bug in the repr for :class:`Series` when dtype is a timezone aware datetime with non-nanosecond resolution raising ``OutOfBoundsDatetime`` (:issue:`54623`)
Timedelta
^^^^^^^^^
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index ff26abd5cc26c..2297f7945a264 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -1830,8 +1830,8 @@ def get_format_datetime64_from_values(
class Datetime64TZFormatter(Datetime64Formatter):
def _format_strings(self) -> list[str]:
"""we by definition have a TZ"""
+ ido = is_dates_only(self.values)
values = self.values.astype(object)
- ido = is_dates_only(values)
formatter = self.formatter or get_format_datetime64(
ido, date_format=self.date_format
)
diff --git a/pandas/tests/io/formats/test_format.py b/pandas/tests/io/formats/test_format.py
index 3b0dac21ef10c..7dfd35d5424b4 100644
--- a/pandas/tests/io/formats/test_format.py
+++ b/pandas/tests/io/formats/test_format.py
@@ -3320,6 +3320,14 @@ def format_func(x):
result = formatter.get_result()
assert result == ["10:10", "12:12"]
+ def test_datetime64formatter_tz_ms(self):
+ x = Series(
+ np.array(["2999-01-01", "2999-01-02", "NaT"], dtype="datetime64[ms]")
+ ).dt.tz_localize("US/Pacific")
+ result = fmt.Datetime64TZFormatter(x).get_result()
+ assert result[0].strip() == "2999-01-01 00:00:00-08:00"
+ assert result[1].strip() == "2999-01-02 00:00:00-08:00"
+
class TestNaTFormatting:
def test_repr(self):
| Backport PR #54625: BUG: Fix error in printing timezone series | https://api.github.com/repos/pandas-dev/pandas/pulls/54698 | 2023-08-22T23:55:34Z | 2023-08-23T02:42:30Z | 2023-08-23T02:42:30Z | 2023-08-23T02:42:30Z |
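The backported test above expects strings like `"2999-01-01 00:00:00-08:00"` once the `is_dates_only` check runs on the original datetime64 values instead of the object-cast copy. The expected format can be reproduced with the standard library alone — a sketch, where the fixed `-08:00` offset is an illustration-only stand-in for US/Pacific standard time (pandas resolves the real zone through the tz database):

```python
from datetime import datetime, timedelta, timezone

# Illustration-only assumption: a fixed -08:00 offset standing in for
# US/Pacific standard time.
pst = timezone(timedelta(hours=-8))

values = [datetime(2999, 1, 1, tzinfo=pst), datetime(2999, 1, 2, tzinfo=pst)]
# isoformat with a space separator yields the "YYYY-MM-DD HH:MM:SS-08:00"
# shape asserted in test_datetime64formatter_tz_ms.
formatted = [v.isoformat(sep=" ") for v in values]
```

Note that year 2999 overflows the nanosecond range, which is exactly why the repr raised `OutOfBoundsDatetime` before this fix for non-nanosecond resolutions.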
Backport PR #54694 on branch 2.1.x (MAINT: Remove `np.in1d` function calls) | diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index 06da747a450ee..0a9c1aad46f89 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -518,9 +518,9 @@ def isin(comps: ListLike, values: ListLike) -> npt.NDArray[np.bool_]:
return isin(np.asarray(comps_array), np.asarray(values))
# GH16012
- # Ensure np.in1d doesn't get object types or it *may* throw an exception
+ # Ensure np.isin doesn't get object types or it *may* throw an exception
# Albeit hashmap has O(1) look-up (vs. O(logn) in sorted array),
- # in1d is faster for small sizes
+ # isin is faster for small sizes
if (
len(comps_array) > _MINIMUM_COMP_ARR_LEN
and len(values) <= 26
@@ -531,10 +531,10 @@ def isin(comps: ListLike, values: ListLike) -> npt.NDArray[np.bool_]:
if isna(values).any():
def f(c, v):
- return np.logical_or(np.in1d(c, v), np.isnan(c))
+ return np.logical_or(np.isin(c, v).ravel(), np.isnan(c))
else:
- f = np.in1d
+ f = lambda a, b: np.isin(a, b).ravel()
else:
common = np_find_common_type(values.dtype, comps_array.dtype)
diff --git a/pandas/core/arrays/interval.py b/pandas/core/arrays/interval.py
index 3263dd73fe4dc..2fce6631afa9b 100644
--- a/pandas/core/arrays/interval.py
+++ b/pandas/core/arrays/interval.py
@@ -1844,14 +1844,14 @@ def isin(self, values) -> npt.NDArray[np.bool_]:
# complex128 ndarray is much more performant.
left = self._combined.view("complex128")
right = values._combined.view("complex128")
- # error: Argument 1 to "in1d" has incompatible type
+ # error: Argument 1 to "isin" has incompatible type
# "Union[ExtensionArray, ndarray[Any, Any],
# ndarray[Any, dtype[Any]]]"; expected
# "Union[_SupportsArray[dtype[Any]],
# _NestedSequence[_SupportsArray[dtype[Any]]], bool,
# int, float, complex, str, bytes, _NestedSequence[
# Union[bool, int, float, complex, str, bytes]]]"
- return np.in1d(left, right) # type: ignore[arg-type]
+ return np.isin(left, right).ravel() # type: ignore[arg-type]
elif needs_i8_conversion(self.left.dtype) ^ needs_i8_conversion(
values.left.dtype
diff --git a/pandas/tseries/holiday.py b/pandas/tseries/holiday.py
index 44c21bc284121..75cb7f7850013 100644
--- a/pandas/tseries/holiday.py
+++ b/pandas/tseries/holiday.py
@@ -283,11 +283,11 @@ def dates(
holiday_dates = self._apply_rule(dates)
if self.days_of_week is not None:
holiday_dates = holiday_dates[
- np.in1d(
+ np.isin(
# error: "DatetimeIndex" has no attribute "dayofweek"
holiday_dates.dayofweek, # type: ignore[attr-defined]
self.days_of_week,
- )
+ ).ravel()
]
if self.start_date is not None:
| Backport PR #54694: MAINT: Remove `np.in1d` function calls | https://api.github.com/repos/pandas-dev/pandas/pulls/54697 | 2023-08-22T22:51:35Z | 2023-08-23T00:15:40Z | 2023-08-23T00:15:40Z | 2023-08-23T00:15:40Z |
DOC: fix an example which raises an Error in v0.13.0.rst | diff --git a/doc/source/whatsnew/v0.13.0.rst b/doc/source/whatsnew/v0.13.0.rst
index c60a821968c0c..540e1d23814ff 100644
--- a/doc/source/whatsnew/v0.13.0.rst
+++ b/doc/source/whatsnew/v0.13.0.rst
@@ -537,7 +537,6 @@ Enhancements
is frequency conversion. See :ref:`the docs<timedeltas.timedeltas_convert>` for the docs.
.. ipython:: python
- :okexcept:
import datetime
td = pd.Series(pd.date_range('20130101', periods=4)) - pd.Series(
@@ -546,13 +545,41 @@ Enhancements
td[3] = np.nan
td
+ .. code-block:: ipython
+
# to days
- td / np.timedelta64(1, 'D')
- td.astype('timedelta64[D]')
+ In [63]: td / np.timedelta64(1, 'D')
+ Out[63]:
+ 0 31.000000
+ 1 31.000000
+ 2 31.003507
+ 3 NaN
+ dtype: float64
+
+ In [64]: td.astype('timedelta64[D]')
+ Out[64]:
+ 0 31.0
+ 1 31.0
+ 2 31.0
+ 3 NaN
+ dtype: float64
# to seconds
- td / np.timedelta64(1, 's')
- td.astype('timedelta64[s]')
+ In [65]: td / np.timedelta64(1, 's')
+ Out[65]:
+ 0 2678400.0
+ 1 2678400.0
+ 2 2678703.0
+ 3 NaN
+ dtype: float64
+
+ In [66]: td.astype('timedelta64[s]')
+ Out[66]:
+ 0 2678400.0
+ 1 2678400.0
+ 2 2678703.0
+ 3 NaN
+ dtype: float64
Dividing or multiplying a ``timedelta64[ns]`` Series by an integer or integer Series
| Fix an example in doc/source/whatsnew/v0.13.0.rst, which raises an Error (see https://pandas.pydata.org/docs/dev/whatsnew/v0.13.0.html#enhancements) | https://api.github.com/repos/pandas-dev/pandas/pulls/54696 | 2023-08-22T22:29:15Z | 2023-08-23T15:43:58Z | 2023-08-23T15:43:58Z | 2023-08-23T15:44:05Z |
MAINT: Remove `np.in1d` function calls | diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index 06da747a450ee..0a9c1aad46f89 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -518,9 +518,9 @@ def isin(comps: ListLike, values: ListLike) -> npt.NDArray[np.bool_]:
return isin(np.asarray(comps_array), np.asarray(values))
# GH16012
- # Ensure np.in1d doesn't get object types or it *may* throw an exception
+ # Ensure np.isin doesn't get object types or it *may* throw an exception
# Albeit hashmap has O(1) look-up (vs. O(logn) in sorted array),
- # in1d is faster for small sizes
+ # isin is faster for small sizes
if (
len(comps_array) > _MINIMUM_COMP_ARR_LEN
and len(values) <= 26
@@ -531,10 +531,10 @@ def isin(comps: ListLike, values: ListLike) -> npt.NDArray[np.bool_]:
if isna(values).any():
def f(c, v):
- return np.logical_or(np.in1d(c, v), np.isnan(c))
+ return np.logical_or(np.isin(c, v).ravel(), np.isnan(c))
else:
- f = np.in1d
+ f = lambda a, b: np.isin(a, b).ravel()
else:
common = np_find_common_type(values.dtype, comps_array.dtype)
diff --git a/pandas/core/arrays/interval.py b/pandas/core/arrays/interval.py
index 9c2e85c4df564..6875ab434a5f8 100644
--- a/pandas/core/arrays/interval.py
+++ b/pandas/core/arrays/interval.py
@@ -1844,14 +1844,14 @@ def isin(self, values) -> npt.NDArray[np.bool_]:
# complex128 ndarray is much more performant.
left = self._combined.view("complex128")
right = values._combined.view("complex128")
- # error: Argument 1 to "in1d" has incompatible type
+ # error: Argument 1 to "isin" has incompatible type
# "Union[ExtensionArray, ndarray[Any, Any],
# ndarray[Any, dtype[Any]]]"; expected
# "Union[_SupportsArray[dtype[Any]],
# _NestedSequence[_SupportsArray[dtype[Any]]], bool,
# int, float, complex, str, bytes, _NestedSequence[
# Union[bool, int, float, complex, str, bytes]]]"
- return np.in1d(left, right) # type: ignore[arg-type]
+ return np.isin(left, right).ravel() # type: ignore[arg-type]
elif needs_i8_conversion(self.left.dtype) ^ needs_i8_conversion(
values.left.dtype
diff --git a/pandas/tseries/holiday.py b/pandas/tseries/holiday.py
index 44c21bc284121..75cb7f7850013 100644
--- a/pandas/tseries/holiday.py
+++ b/pandas/tseries/holiday.py
@@ -283,11 +283,11 @@ def dates(
holiday_dates = self._apply_rule(dates)
if self.days_of_week is not None:
holiday_dates = holiday_dates[
- np.in1d(
+ np.isin(
# error: "DatetimeIndex" has no attribute "dayofweek"
holiday_dates.dayofweek, # type: ignore[attr-defined]
self.days_of_week,
- )
+ ).ravel()
]
if self.start_date is not None:
| Hi!
This PR changes `np.in1d` calls to `np.isin` as `np.in1d` is being made private in https://github.com/numpy/numpy/pull/24445.
I wasn't sure whether the arrays passed in are guaranteed to be 1-D, so I added `.ravel()` to reproduce `in1d`'s flattening behavior exactly. If these call sites only ever operate on 1-D arrays, then `isin` and `in1d` are interchangeable and the `.ravel()` calls can be dropped. | https://api.github.com/repos/pandas-dev/pandas/pulls/54694 | 2023-08-22T20:14:05Z | 2023-08-22T22:50:27Z | 2023-08-22T22:50:27Z | 2023-08-23T07:06:42Z |
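The distinction driving the `.ravel()` calls: `np.in1d` always flattens its first argument to 1-D, while `np.isin` preserves its shape. A pure-Python sketch of the flattened membership test (the function name is invented for illustration):

```python
def isin_flat(comps, values):
    """Membership test flattened to 1-D, mirroring np.isin(comps, values).ravel()."""
    lookup = set(values)  # hash-based lookup, O(1) per element
    flat = []

    def walk(obj):
        # Descend into nested lists to emulate flattening an n-D input.
        if isinstance(obj, list):
            for item in obj:
                walk(item)
        else:
            flat.append(obj in lookup)

    walk(comps)
    return flat
```

On a nested ("2-D") input the result is still flat, which is the behavior `in1d` gave and `.ravel()` restores.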
Backport PR #54566 on branch 2.1.x (ENH: support Index.any/all with float, timedelta64 dtypes) | diff --git a/doc/source/whatsnew/v2.1.0.rst b/doc/source/whatsnew/v2.1.0.rst
index 43a64a79e691b..d5c8a4974345c 100644
--- a/doc/source/whatsnew/v2.1.0.rst
+++ b/doc/source/whatsnew/v2.1.0.rst
@@ -265,6 +265,7 @@ Other enhancements
- Many read/to_* functions, such as :meth:`DataFrame.to_pickle` and :func:`read_csv`, support forwarding compression arguments to ``lzma.LZMAFile`` (:issue:`52979`)
- Reductions :meth:`Series.argmax`, :meth:`Series.argmin`, :meth:`Series.idxmax`, :meth:`Series.idxmin`, :meth:`Index.argmax`, :meth:`Index.argmin`, :meth:`DataFrame.idxmax`, :meth:`DataFrame.idxmin` are now supported for object-dtype (:issue:`4279`, :issue:`18021`, :issue:`40685`, :issue:`43697`)
- :meth:`DataFrame.to_parquet` and :func:`read_parquet` will now write and read ``attrs`` respectively (:issue:`54346`)
+- :meth:`Index.all` and :meth:`Index.any` with floating dtypes and timedelta64 dtypes no longer raise ``TypeError``, matching the :meth:`Series.all` and :meth:`Series.any` behavior (:issue:`54566`)
- :meth:`Series.cummax`, :meth:`Series.cummin` and :meth:`Series.cumprod` are now supported for pyarrow dtypes with pyarrow version 13.0 and above (:issue:`52085`)
- Added support for the DataFrame Consortium Standard (:issue:`54383`)
- Performance improvement in :meth:`.DataFrameGroupBy.quantile` and :meth:`.SeriesGroupBy.quantile` (:issue:`51722`)
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 241b2de513a04..1f15a4ad84755 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -7215,11 +7215,12 @@ def any(self, *args, **kwargs):
"""
nv.validate_any(args, kwargs)
self._maybe_disable_logical_methods("any")
- # error: Argument 1 to "any" has incompatible type "ArrayLike"; expected
- # "Union[Union[int, float, complex, str, bytes, generic], Sequence[Union[int,
- # float, complex, str, bytes, generic]], Sequence[Sequence[Any]],
- # _SupportsArray]"
- return np.any(self.values) # type: ignore[arg-type]
+ vals = self._values
+ if not isinstance(vals, np.ndarray):
+ # i.e. EA, call _reduce instead of "any" to get TypeError instead
+ # of AttributeError
+ return vals._reduce("any")
+ return np.any(vals)
def all(self, *args, **kwargs):
"""
@@ -7262,11 +7263,12 @@ def all(self, *args, **kwargs):
"""
nv.validate_all(args, kwargs)
self._maybe_disable_logical_methods("all")
- # error: Argument 1 to "all" has incompatible type "ArrayLike"; expected
- # "Union[Union[int, float, complex, str, bytes, generic], Sequence[Union[int,
- # float, complex, str, bytes, generic]], Sequence[Sequence[Any]],
- # _SupportsArray]"
- return np.all(self.values) # type: ignore[arg-type]
+ vals = self._values
+ if not isinstance(vals, np.ndarray):
+ # i.e. EA, call _reduce instead of "all" to get TypeError instead
+ # of AttributeError
+ return vals._reduce("all")
+ return np.all(vals)
@final
def _maybe_disable_logical_methods(self, opname: str_t) -> None:
@@ -7275,9 +7277,9 @@ def _maybe_disable_logical_methods(self, opname: str_t) -> None:
"""
if (
isinstance(self, ABCMultiIndex)
- or needs_i8_conversion(self.dtype)
- or isinstance(self.dtype, (IntervalDtype, CategoricalDtype))
- or is_float_dtype(self.dtype)
+ # TODO(3.0): PeriodArray and DatetimeArray any/all will raise,
+ # so checking needs_i8_conversion will be unnecessary
+ or (needs_i8_conversion(self.dtype) and self.dtype.kind != "m")
):
# This call will raise
make_invalid_op(opname)(self)
diff --git a/pandas/tests/indexes/numeric/test_numeric.py b/pandas/tests/indexes/numeric/test_numeric.py
index 977c7da7d866f..8cd295802a5d1 100644
--- a/pandas/tests/indexes/numeric/test_numeric.py
+++ b/pandas/tests/indexes/numeric/test_numeric.py
@@ -227,6 +227,14 @@ def test_fillna_float64(self):
exp = Index([1.0, "obj", 3.0], name="x")
tm.assert_index_equal(idx.fillna("obj"), exp, exact=True)
+ def test_logical_compat(self, simple_index):
+ idx = simple_index
+ assert idx.all() == idx.values.all()
+ assert idx.any() == idx.values.any()
+
+ assert idx.all() == idx.to_series().all()
+ assert idx.any() == idx.to_series().any()
+
class TestNumericInt:
@pytest.fixture(params=[np.int64, np.int32, np.int16, np.int8, np.uint64])
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index ffa0b115e34fb..bc04c1c6612f4 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -692,7 +692,12 @@ def test_format_missing(self, vals, nulls_fixture):
@pytest.mark.parametrize("op", ["any", "all"])
def test_logical_compat(self, op, simple_index):
index = simple_index
- assert getattr(index, op)() == getattr(index.values, op)()
+ left = getattr(index, op)()
+ assert left == getattr(index.values, op)()
+ right = getattr(index.to_series(), op)()
    # left might not match right exactly in e.g. string cases
    # because we use np.any/all instead of .any/.all
+ assert bool(left) == bool(right)
@pytest.mark.parametrize(
"index", ["string", "int64", "int32", "float64", "float32"], indirect=True
diff --git a/pandas/tests/indexes/test_old_base.py b/pandas/tests/indexes/test_old_base.py
index f8f5a543a9c19..79dc423f12a85 100644
--- a/pandas/tests/indexes/test_old_base.py
+++ b/pandas/tests/indexes/test_old_base.py
@@ -209,17 +209,25 @@ def test_numeric_compat(self, simple_index):
1 // idx
def test_logical_compat(self, simple_index):
- if (
- isinstance(simple_index, RangeIndex)
- or is_numeric_dtype(simple_index.dtype)
- or simple_index.dtype == object
- ):
+ if simple_index.dtype == object:
pytest.skip("Tested elsewhere.")
idx = simple_index
- with pytest.raises(TypeError, match="cannot perform all"):
- idx.all()
- with pytest.raises(TypeError, match="cannot perform any"):
- idx.any()
+ if idx.dtype.kind in "iufcbm":
+ assert idx.all() == idx._values.all()
+ assert idx.all() == idx.to_series().all()
+ assert idx.any() == idx._values.any()
+ assert idx.any() == idx.to_series().any()
+ else:
+ msg = "cannot perform (any|all)"
+ if isinstance(idx, IntervalIndex):
+ msg = (
+ r"'IntervalArray' with dtype interval\[.*\] does "
+ "not support reduction '(any|all)'"
+ )
+ with pytest.raises(TypeError, match=msg):
+ idx.all()
+ with pytest.raises(TypeError, match=msg):
+ idx.any()
def test_repr_roundtrip(self, simple_index):
if isinstance(simple_index, IntervalIndex):
| Backport PR #54566: ENH: support Index.any/all with float, timedelta64 dtypes | https://api.github.com/repos/pandas-dev/pandas/pulls/54693 | 2023-08-22T19:08:48Z | 2023-08-22T21:18:15Z | 2023-08-22T21:18:15Z | 2023-08-22T21:18:16Z |
Backport PR #54670 on branch 2.1.x (BUG: drop_duplicates raising for boolean arrow dtype with missing values) | diff --git a/doc/source/whatsnew/v2.1.0.rst b/doc/source/whatsnew/v2.1.0.rst
index 43a64a79e691b..bff026d27dbce 100644
--- a/doc/source/whatsnew/v2.1.0.rst
+++ b/doc/source/whatsnew/v2.1.0.rst
@@ -626,6 +626,7 @@ Performance improvements
- Performance improvement in :meth:`DataFrame.transpose` when transposing a DataFrame with a single masked dtype, e.g. :class:`Int64` (:issue:`52836`)
- Performance improvement in :meth:`Series.add` for PyArrow string and binary dtypes (:issue:`53150`)
- Performance improvement in :meth:`Series.corr` and :meth:`Series.cov` for extension dtypes (:issue:`52502`)
+- Performance improvement in :meth:`Series.drop_duplicates` for ``ArrowDtype`` (:issue:`54667`).
- Performance improvement in :meth:`Series.ffill`, :meth:`Series.bfill`, :meth:`DataFrame.ffill`, :meth:`DataFrame.bfill` with PyArrow dtypes (:issue:`53950`)
- Performance improvement in :meth:`Series.str.get_dummies` for PyArrow-backed strings (:issue:`53655`)
- Performance improvement in :meth:`Series.str.get` for PyArrow-backed strings (:issue:`53152`)
@@ -830,6 +831,7 @@ ExtensionArray
- Bug in :class:`~arrays.ArrowExtensionArray` converting pandas non-nanosecond temporal objects from non-zero values to zero values (:issue:`53171`)
- Bug in :meth:`Series.quantile` for PyArrow temporal types raising ``ArrowInvalid`` (:issue:`52678`)
- Bug in :meth:`Series.rank` returning wrong order for small values with ``Float64`` dtype (:issue:`52471`)
+- Bug in :meth:`Series.unique` for boolean ``ArrowDtype`` with ``NA`` values (:issue:`54667`)
- Bug in :meth:`~arrays.ArrowExtensionArray.__iter__` and :meth:`~arrays.ArrowExtensionArray.__getitem__` returning python datetime and timedelta objects for non-nano dtypes (:issue:`53326`)
- Bug where the :class:`DataFrame` repr would not work when a column had an :class:`ArrowDtype` with a ``pyarrow.ExtensionDtype`` (:issue:`54063`)
- Bug where the ``__from_arrow__`` method of masked ExtensionDtypes (e.g. :class:`Float64Dtype`, :class:`BooleanDtype`) would not accept PyArrow arrays of type ``pyarrow.null()`` (:issue:`52223`)
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index 14dee202a9d8d..06da747a450ee 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -55,6 +55,7 @@
)
from pandas.core.dtypes.concat import concat_compat
from pandas.core.dtypes.dtypes import (
+ ArrowDtype,
BaseMaskedDtype,
CategoricalDtype,
ExtensionDtype,
@@ -996,9 +997,13 @@ def duplicated(
-------
duplicated : ndarray[bool]
"""
- if hasattr(values, "dtype") and isinstance(values.dtype, BaseMaskedDtype):
- values = cast("BaseMaskedArray", values)
- return htable.duplicated(values._data, keep=keep, mask=values._mask)
+ if hasattr(values, "dtype"):
+ if isinstance(values.dtype, ArrowDtype):
+ values = values._to_masked() # type: ignore[union-attr]
+
+ if isinstance(values.dtype, BaseMaskedDtype):
+ values = cast("BaseMaskedArray", values)
+ return htable.duplicated(values._data, keep=keep, mask=values._mask)
values = _ensure_data(values)
return htable.duplicated(values, keep=keep)
diff --git a/pandas/tests/series/methods/test_drop_duplicates.py b/pandas/tests/series/methods/test_drop_duplicates.py
index 96c2e1ba6d9bb..324ab1204e16e 100644
--- a/pandas/tests/series/methods/test_drop_duplicates.py
+++ b/pandas/tests/series/methods/test_drop_duplicates.py
@@ -249,3 +249,10 @@ def test_drop_duplicates_ignore_index(self):
result = ser.drop_duplicates(ignore_index=True)
expected = Series([1, 2, 3])
tm.assert_series_equal(result, expected)
+
+ def test_duplicated_arrow_dtype(self):
+ pytest.importorskip("pyarrow")
+ ser = Series([True, False, None, False], dtype="bool[pyarrow]")
+ result = ser.drop_duplicates()
+ expected = Series([True, False, None], dtype="bool[pyarrow]")
+ tm.assert_series_equal(result, expected)
| Backport PR #54670: BUG: drop_duplicates raising for boolean arrow dtype with missing values | https://api.github.com/repos/pandas-dev/pandas/pulls/54692 | 2023-08-22T18:48:31Z | 2023-08-22T20:41:59Z | 2023-08-22T20:41:59Z | 2023-08-22T20:41:59Z |
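The fix above routes pyarrow-backed arrays through `_to_masked()` so `htable.duplicated` receives a data array plus a boolean mask, letting missing values compare equal to each other. A hedged pure-Python sketch of the `keep="first"` semantics (function name invented for illustration):

```python
def duplicated_first(values, mask):
    """Mark later occurrences as duplicates; masked (NA) slots compare equal."""
    NA = object()  # sentinel so None-as-data and NA-as-missing don't collide
    seen = set()
    out = []
    for val, is_na in zip(values, mask):
        key = NA if is_na else val
        out.append(key in seen)
        seen.add(key)
    return out
```

Applied to the test's data `[True, False, None, False]`, only the trailing `False` is flagged, so `drop_duplicates` keeps `[True, False, None]`.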
Backport PR #54685 on branch 2.1.x (ENH: support integer bitwise ops in ArrowExtensionArray) | diff --git a/doc/source/whatsnew/v2.1.0.rst b/doc/source/whatsnew/v2.1.0.rst
index 43a64a79e691b..9341237acfaa1 100644
--- a/doc/source/whatsnew/v2.1.0.rst
+++ b/doc/source/whatsnew/v2.1.0.rst
@@ -268,6 +268,7 @@ Other enhancements
- :meth:`Series.cummax`, :meth:`Series.cummin` and :meth:`Series.cumprod` are now supported for pyarrow dtypes with pyarrow version 13.0 and above (:issue:`52085`)
- Added support for the DataFrame Consortium Standard (:issue:`54383`)
- Performance improvement in :meth:`.DataFrameGroupBy.quantile` and :meth:`.SeriesGroupBy.quantile` (:issue:`51722`)
+- PyArrow-backed integer dtypes now support bitwise operations (:issue:`54495`)
.. ---------------------------------------------------------------------------
.. _whatsnew_210.api_breaking:
diff --git a/pandas/core/arrays/arrow/array.py b/pandas/core/arrays/arrow/array.py
index 43320cf68cbec..48ff769f6c737 100644
--- a/pandas/core/arrays/arrow/array.py
+++ b/pandas/core/arrays/arrow/array.py
@@ -84,6 +84,15 @@
"rxor": lambda x, y: pc.xor(y, x),
}
+ ARROW_BIT_WISE_FUNCS = {
+ "and_": pc.bit_wise_and,
+ "rand_": lambda x, y: pc.bit_wise_and(y, x),
+ "or_": pc.bit_wise_or,
+ "ror_": lambda x, y: pc.bit_wise_or(y, x),
+ "xor": pc.bit_wise_xor,
+ "rxor": lambda x, y: pc.bit_wise_xor(y, x),
+ }
+
def cast_for_truediv(
arrow_array: pa.ChunkedArray, pa_object: pa.Array | pa.Scalar
) -> pa.ChunkedArray:
@@ -582,7 +591,11 @@ def __array__(self, dtype: NpDtype | None = None) -> np.ndarray:
return self.to_numpy(dtype=dtype)
def __invert__(self) -> Self:
- return type(self)(pc.invert(self._pa_array))
+ # This is a bit wise op for integer types
+ if pa.types.is_integer(self._pa_array.type):
+ return type(self)(pc.bit_wise_not(self._pa_array))
+ else:
+ return type(self)(pc.invert(self._pa_array))
def __neg__(self) -> Self:
return type(self)(pc.negate_checked(self._pa_array))
@@ -657,7 +670,12 @@ def _evaluate_op_method(self, other, op, arrow_funcs):
return type(self)(result)
def _logical_method(self, other, op):
- return self._evaluate_op_method(other, op, ARROW_LOGICAL_FUNCS)
+ # For integer types `^`, `|`, `&` are bitwise operators and return
+ # integer types. Otherwise these are boolean ops.
+ if pa.types.is_integer(self._pa_array.type):
+ return self._evaluate_op_method(other, op, ARROW_BIT_WISE_FUNCS)
+ else:
+ return self._evaluate_op_method(other, op, ARROW_LOGICAL_FUNCS)
def _arith_method(self, other, op):
return self._evaluate_op_method(other, op, ARROW_ARITHMETIC_FUNCS)
diff --git a/pandas/tests/extension/test_arrow.py b/pandas/tests/extension/test_arrow.py
index e748f320b3f09..a9b7a8c655032 100644
--- a/pandas/tests/extension/test_arrow.py
+++ b/pandas/tests/extension/test_arrow.py
@@ -753,7 +753,7 @@ def test_EA_types(self, engine, data, dtype_backend, request):
class TestBaseUnaryOps(base.BaseUnaryOpsTests):
def test_invert(self, data, request):
pa_dtype = data.dtype.pyarrow_dtype
- if not pa.types.is_boolean(pa_dtype):
+ if not (pa.types.is_boolean(pa_dtype) or pa.types.is_integer(pa_dtype)):
request.node.add_marker(
pytest.mark.xfail(
raises=pa.ArrowNotImplementedError,
@@ -1339,6 +1339,31 @@ def test_logical_masked_numpy(self, op, exp):
tm.assert_series_equal(result, expected)
+@pytest.mark.parametrize("pa_type", tm.ALL_INT_PYARROW_DTYPES)
+def test_bitwise(pa_type):
+ # GH 54495
+ dtype = ArrowDtype(pa_type)
+ left = pd.Series([1, None, 3, 4], dtype=dtype)
+ right = pd.Series([None, 3, 5, 4], dtype=dtype)
+
+ result = left | right
+ expected = pd.Series([None, None, 3 | 5, 4 | 4], dtype=dtype)
+ tm.assert_series_equal(result, expected)
+
+ result = left & right
+ expected = pd.Series([None, None, 3 & 5, 4 & 4], dtype=dtype)
+ tm.assert_series_equal(result, expected)
+
+ result = left ^ right
+ expected = pd.Series([None, None, 3 ^ 5, 4 ^ 4], dtype=dtype)
+ tm.assert_series_equal(result, expected)
+
+ result = ~left
+ expected = ~(left.fillna(0).to_numpy())
+ expected = pd.Series(expected, dtype=dtype).mask(left.isnull())
+ tm.assert_series_equal(result, expected)
+
+
def test_arrowdtype_construct_from_string_type_with_unsupported_parameters():
with pytest.raises(NotImplementedError, match="Passing pyarrow type"):
ArrowDtype.construct_from_string("not_a_real_dype[s, tz=UTC][pyarrow]")
| Backport PR #54685: ENH: support integer bitwise ops in ArrowExtensionArray | https://api.github.com/repos/pandas-dev/pandas/pulls/54691 | 2023-08-22T18:23:01Z | 2023-08-22T20:41:48Z | 2023-08-22T20:41:48Z | 2023-08-22T20:41:48Z |
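The expected values in `test_bitwise` above can be checked with plain Python ints plus NA propagation. A sketch — the helper name is invented, and `~` on Python's arbitrary-precision ints matches `pc.bit_wise_not` only under signed two's-complement semantics, which is what the signed test dtypes use:

```python
import operator

def masked_bitwise(op, left, right):
    """Elementwise bitwise op; None propagates as missing, as in the test above."""
    return [None if a is None or b is None else op(a, b)
            for a, b in zip(left, right)]

left = [1, None, 3, 4]
right = [None, 3, 5, 4]

or_result = masked_bitwise(operator.or_, left, right)    # 3 | 5 == 7, 4 | 4 == 4
and_result = masked_bitwise(operator.and_, left, right)  # 3 & 5 == 1, 4 & 4 == 4
xor_result = masked_bitwise(operator.xor, left, right)   # 3 ^ 5 == 6, 4 ^ 4 == 0
inverted = [None if a is None else ~a for a in left]     # two's-complement bitwise not
```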
BUG: inaccurate Index._can_hold_na | diff --git a/pandas/core/arrays/interval.py b/pandas/core/arrays/interval.py
index 3263dd73fe4dc..9c2e85c4df564 100644
--- a/pandas/core/arrays/interval.py
+++ b/pandas/core/arrays/interval.py
@@ -764,7 +764,7 @@ def _cmp_method(self, other, op):
if self.closed != other.categories.closed:
return invalid_comparison(self, other, op)
- other = other.categories.take(
+ other = other.categories._values.take(
other.codes, allow_fill=True, fill_value=other.categories._na_value
)
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 796aadf9e4061..2305c1cbb698e 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -2172,11 +2172,6 @@ def _drop_level_numbers(self, levnums: list[int]):
@final
def _can_hold_na(self) -> bool:
if isinstance(self.dtype, ExtensionDtype):
- if isinstance(self.dtype, IntervalDtype):
- # FIXME(GH#45720): this is inaccurate for integer-backed
- # IntervalArray, but without it other.categories.take raises
- # in IntervalArray._cmp_method
- return True
return self.dtype._can_hold_na
if self.dtype.kind in "iub":
return False
- [x] closes #45720
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54690 | 2023-08-22T16:04:38Z | 2023-08-22T19:04:36Z | 2023-08-22T19:04:36Z | 2023-08-22T19:21:11Z |
Backport PR #54510 on branch 2.1.x (Speed up StringDtype arrow implementation for merge) | diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py
index 6987a0ac7bf6b..c2cb9d643ca87 100644
--- a/pandas/core/reshape/merge.py
+++ b/pandas/core/reshape/merge.py
@@ -90,6 +90,7 @@
ExtensionArray,
)
from pandas.core.arrays._mixins import NDArrayBackedExtensionArray
+from pandas.core.arrays.string_ import StringDtype
import pandas.core.common as com
from pandas.core.construction import (
ensure_wrapped_if_datetimelike,
@@ -2399,21 +2400,9 @@ def _factorize_keys(
rk = ensure_int64(rk.codes)
elif isinstance(lk, ExtensionArray) and lk.dtype == rk.dtype:
- if not isinstance(lk, BaseMaskedArray) and not (
- # exclude arrow dtypes that would get cast to object
- isinstance(lk.dtype, ArrowDtype)
- and (
- is_numeric_dtype(lk.dtype.numpy_dtype)
- or is_string_dtype(lk.dtype)
- and not sort
- )
+ if (isinstance(lk.dtype, ArrowDtype) and is_string_dtype(lk.dtype)) or (
+ isinstance(lk.dtype, StringDtype) and lk.dtype.storage == "pyarrow"
):
- lk, _ = lk._values_for_factorize()
-
- # error: Item "ndarray" of "Union[Any, ndarray]" has no attribute
- # "_values_for_factorize"
- rk, _ = rk._values_for_factorize() # type: ignore[union-attr]
- elif isinstance(lk.dtype, ArrowDtype) and is_string_dtype(lk.dtype):
import pyarrow as pa
import pyarrow.compute as pc
@@ -2436,6 +2425,21 @@ def _factorize_keys(
return rlab, llab, count
return llab, rlab, count
+ if not isinstance(lk, BaseMaskedArray) and not (
+ # exclude arrow dtypes that would get cast to object
+ isinstance(lk.dtype, ArrowDtype)
+ and (
+ is_numeric_dtype(lk.dtype.numpy_dtype)
+ or is_string_dtype(lk.dtype)
+ and not sort
+ )
+ ):
+ lk, _ = lk._values_for_factorize()
+
+ # error: Item "ndarray" of "Union[Any, ndarray]" has no attribute
+ # "_values_for_factorize"
+ rk, _ = rk._values_for_factorize() # type: ignore[union-attr]
+
if needs_i8_conversion(lk.dtype) and lk.dtype == rk.dtype:
# GH#23917 TODO: Needs tests for non-matching dtypes
# GH#23917 TODO: needs tests for case where lk is integer-dtype
| Backport PR #54510: Speed up StringDtype arrow implementation for merge | https://api.github.com/repos/pandas-dev/pandas/pulls/54689 | 2023-08-22T15:05:52Z | 2023-08-22T18:11:28Z | 2023-08-22T18:11:27Z | 2023-08-22T18:11:28Z |
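The reordered branches above all feed `_factorize_keys`, whose job is to encode both join-key arrays into one shared code space (the speed-up is in letting pyarrow-backed strings take the fast pyarrow path before the generic `_values_for_factorize` fallback). A simplified pure-Python sketch of that joint factorization (names invented for illustration):

```python
def factorize_keys(lk, rk):
    """Encode both key sequences against one shared code table."""
    codes = {}

    def encode(arr):
        out = []
        for val in arr:
            if val not in codes:
                codes[val] = len(codes)  # first sighting assigns the next code
            out.append(codes[val])
        return out

    llab = encode(lk)
    rlab = encode(rk)
    return llab, rlab, len(codes)  # count of distinct keys across both sides
```

Because the table is shared, equal keys on the left and right get the same integer label, which is what the join algorithms downstream rely on.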
Backport PR #54678 on branch 2.1.x (COMPAT: Workaround invalid PyArrow duration conversion) | diff --git a/pandas/core/arrays/arrow/array.py b/pandas/core/arrays/arrow/array.py
index 3c65e6b4879e2..43320cf68cbec 100644
--- a/pandas/core/arrays/arrow/array.py
+++ b/pandas/core/arrays/arrow/array.py
@@ -952,6 +952,9 @@ def convert_fill_value(value, pa_type, dtype):
return value
if isinstance(value, (pa.Scalar, pa.Array, pa.ChunkedArray)):
return value
+ if isinstance(value, Timedelta) and value.unit in ("s", "ms"):
+ # Workaround https://github.com/apache/arrow/issues/37291
+ value = value.to_numpy()
if is_array_like(value):
pa_box = pa.array
else:
| Backport PR #54678: COMPAT: Workaround invalid PyArrow duration conversion | https://api.github.com/repos/pandas-dev/pandas/pulls/54688 | 2023-08-22T13:37:34Z | 2023-08-22T18:11:14Z | 2023-08-22T18:11:14Z | 2023-08-22T18:11:14Z |
BUG: Fix astype str issue 54654 | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index 9eb5bbc8f07d5..3abe96e5241f1 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -326,6 +326,7 @@ Numeric
Conversion
^^^^^^^^^^
+- Bug in :func:`astype` when called with ``str`` on unpickled array - the array might change in-place (:issue:`54654`)
- Bug in :meth:`Series.convert_dtypes` not converting all NA column to ``null[pyarrow]`` (:issue:`55346`)
-
diff --git a/pandas/_libs/lib.pyx b/pandas/_libs/lib.pyx
index 38b34b4bb853c..bd6534494d973 100644
--- a/pandas/_libs/lib.pyx
+++ b/pandas/_libs/lib.pyx
@@ -792,7 +792,8 @@ cpdef ndarray[object] ensure_string_array(
result = np.asarray(arr, dtype="object")
- if copy and result is arr:
+ if copy and (result is arr or np.shares_memory(arr, result)):
+ # GH#54654
result = result.copy()
elif not copy and result is arr:
already_copied = False
diff --git a/pandas/tests/copy_view/test_astype.py b/pandas/tests/copy_view/test_astype.py
index 3d5556bdd2823..d462ce3d3187d 100644
--- a/pandas/tests/copy_view/test_astype.py
+++ b/pandas/tests/copy_view/test_astype.py
@@ -1,3 +1,5 @@
+import pickle
+
import numpy as np
import pytest
@@ -130,6 +132,15 @@ def test_astype_string_and_object_update_original(
tm.assert_frame_equal(df2, df_orig)
+def test_astype_string_copy_on_pickle_roundrip():
+ # https://github.com/pandas-dev/pandas/issues/54654
+ # ensure_string_array may alter array inplace
+ base = Series(np.array([(1, 2), None, 1], dtype="object"))
+ base_copy = pickle.loads(pickle.dumps(base))
+ base_copy.astype(str)
+ tm.assert_series_equal(base, base_copy)
+
+
def test_astype_dict_dtypes(using_copy_on_write):
df = DataFrame(
{"a": [1, 2, 3], "b": [4, 5, 6], "c": Series([1.5, 1.5, 1.5], dtype="float64")}
| - [x] closes #54654
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/v2.2.0.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54687 | 2023-08-22T11:15:16Z | 2023-10-29T17:22:56Z | 2023-10-29T17:22:56Z | 2023-10-29T17:23:05Z |
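The identity check `result is arr` misses the case this PR fixes: after a pickle round-trip, `np.asarray` can hand back a distinct array object that still aliases the original buffer, so mutating the "copy" alters the source in place. A minimal sketch of that aliasing hazard, using a plain NumPy view rather than the pickle path itself:

```python
import numpy as np

# Two distinct ndarray objects can alias the same buffer, which is what
# the added np.shares_memory check guards against.
base = np.array([1, 2, 3], dtype="object")
view = base[:]  # new ndarray object, same underlying data

assert view is not base            # an identity check alone misses this
assert np.shares_memory(base, view)

view[0] = 99                       # mutating the "copy" ...
assert base[0] == 99               # ... changes the original in place
```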
ENH: support integer bitwise ops in ArrowExtensionArray | diff --git a/doc/source/whatsnew/v2.1.0.rst b/doc/source/whatsnew/v2.1.0.rst
index 43a64a79e691b..9341237acfaa1 100644
--- a/doc/source/whatsnew/v2.1.0.rst
+++ b/doc/source/whatsnew/v2.1.0.rst
@@ -268,6 +268,7 @@ Other enhancements
- :meth:`Series.cummax`, :meth:`Series.cummin` and :meth:`Series.cumprod` are now supported for pyarrow dtypes with pyarrow version 13.0 and above (:issue:`52085`)
- Added support for the DataFrame Consortium Standard (:issue:`54383`)
- Performance improvement in :meth:`.DataFrameGroupBy.quantile` and :meth:`.SeriesGroupBy.quantile` (:issue:`51722`)
+- PyArrow-backed integer dtypes now support bitwise operations (:issue:`54495`)
.. ---------------------------------------------------------------------------
.. _whatsnew_210.api_breaking:
diff --git a/pandas/core/arrays/arrow/array.py b/pandas/core/arrays/arrow/array.py
index 3c65e6b4879e2..e0c6848ee9189 100644
--- a/pandas/core/arrays/arrow/array.py
+++ b/pandas/core/arrays/arrow/array.py
@@ -84,6 +84,15 @@
"rxor": lambda x, y: pc.xor(y, x),
}
+ ARROW_BIT_WISE_FUNCS = {
+ "and_": pc.bit_wise_and,
+ "rand_": lambda x, y: pc.bit_wise_and(y, x),
+ "or_": pc.bit_wise_or,
+ "ror_": lambda x, y: pc.bit_wise_or(y, x),
+ "xor": pc.bit_wise_xor,
+ "rxor": lambda x, y: pc.bit_wise_xor(y, x),
+ }
+
def cast_for_truediv(
arrow_array: pa.ChunkedArray, pa_object: pa.Array | pa.Scalar
) -> pa.ChunkedArray:
@@ -582,7 +591,11 @@ def __array__(self, dtype: NpDtype | None = None) -> np.ndarray:
return self.to_numpy(dtype=dtype)
def __invert__(self) -> Self:
- return type(self)(pc.invert(self._pa_array))
+ # This is a bit wise op for integer types
+ if pa.types.is_integer(self._pa_array.type):
+ return type(self)(pc.bit_wise_not(self._pa_array))
+ else:
+ return type(self)(pc.invert(self._pa_array))
def __neg__(self) -> Self:
return type(self)(pc.negate_checked(self._pa_array))
@@ -657,7 +670,12 @@ def _evaluate_op_method(self, other, op, arrow_funcs):
return type(self)(result)
def _logical_method(self, other, op):
- return self._evaluate_op_method(other, op, ARROW_LOGICAL_FUNCS)
+ # For integer types `^`, `|`, `&` are bitwise operators and return
+ # integer types. Otherwise these are boolean ops.
+ if pa.types.is_integer(self._pa_array.type):
+ return self._evaluate_op_method(other, op, ARROW_BIT_WISE_FUNCS)
+ else:
+ return self._evaluate_op_method(other, op, ARROW_LOGICAL_FUNCS)
def _arith_method(self, other, op):
return self._evaluate_op_method(other, op, ARROW_ARITHMETIC_FUNCS)
diff --git a/pandas/tests/extension/test_arrow.py b/pandas/tests/extension/test_arrow.py
index 5955cfc2ef5e4..f9c420607812c 100644
--- a/pandas/tests/extension/test_arrow.py
+++ b/pandas/tests/extension/test_arrow.py
@@ -724,7 +724,7 @@ def test_EA_types(self, engine, data, dtype_backend, request):
def test_invert(self, data, request):
pa_dtype = data.dtype.pyarrow_dtype
- if not pa.types.is_boolean(pa_dtype):
+ if not (pa.types.is_boolean(pa_dtype) or pa.types.is_integer(pa_dtype)):
request.node.add_marker(
pytest.mark.xfail(
raises=pa.ArrowNotImplementedError,
@@ -1287,6 +1287,31 @@ def test_logical_masked_numpy(self, op, exp):
tm.assert_series_equal(result, expected)
+@pytest.mark.parametrize("pa_type", tm.ALL_INT_PYARROW_DTYPES)
+def test_bitwise(pa_type):
+ # GH 54495
+ dtype = ArrowDtype(pa_type)
+ left = pd.Series([1, None, 3, 4], dtype=dtype)
+ right = pd.Series([None, 3, 5, 4], dtype=dtype)
+
+ result = left | right
+ expected = pd.Series([None, None, 3 | 5, 4 | 4], dtype=dtype)
+ tm.assert_series_equal(result, expected)
+
+ result = left & right
+ expected = pd.Series([None, None, 3 & 5, 4 & 4], dtype=dtype)
+ tm.assert_series_equal(result, expected)
+
+ result = left ^ right
+ expected = pd.Series([None, None, 3 ^ 5, 4 ^ 4], dtype=dtype)
+ tm.assert_series_equal(result, expected)
+
+ result = ~left
+ expected = ~(left.fillna(0).to_numpy())
+ expected = pd.Series(expected, dtype=dtype).mask(left.isnull())
+ tm.assert_series_equal(result, expected)
+
+
def test_arrowdtype_construct_from_string_type_with_unsupported_parameters():
with pytest.raises(NotImplementedError, match="Passing pyarrow type"):
ArrowDtype.construct_from_string("not_a_real_dype[s, tz=UTC][pyarrow]")
| - [x] closes #54495
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/v2.1.0.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54685 | 2023-08-22T10:34:01Z | 2023-08-22T18:22:53Z | 2023-08-22T18:22:53Z | 2023-09-06T00:54:08Z |
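The type dispatch added above mirrors plain Python semantics, where `|`, `&`, `^`, and `~` are bit-wise on integers but logical on booleans — the same expected values the test spells out as `3 | 5`, `3 & 5`, and `3 ^ 5`:

```python
# Bit-wise semantics on integers (what ARROW_BIT_WISE_FUNCS maps to):
assert 3 | 5 == 7   # 0b011 | 0b101 -> 0b111
assert 3 & 5 == 1   # 0b011 & 0b101 -> 0b001
assert 3 ^ 5 == 6   # 0b011 ^ 0b101 -> 0b110
assert ~4 == -5     # two's complement: ~x == -x - 1

# On booleans the same operators act logically, hence the dispatch
# in _logical_method and __invert__.
assert (True | False) is True
assert (True & False) is False
```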
DOC: Fix inacurate documentation info (#54547) | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index cf60717011222..9777ced2b209d 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -12340,9 +12340,6 @@ def last_valid_index(self) -> Hashable | None:
Axis for the function to be applied on.
For `Series` this parameter is unused and defaults to 0.
- For DataFrames, specifying ``axis=None`` will apply the aggregation
- across both axes.
-
.. versionadded:: 2.0.0
skipna : bool, default True
| - [ ] closes #54547
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
| https://api.github.com/repos/pandas-dev/pandas/pulls/54684 | 2023-08-22T10:11:55Z | 2023-08-22T18:24:46Z | null | 2023-08-22T19:00:27Z |
DOC: fix an example which raised a KeyError in v0.11.0.rst | diff --git a/doc/source/whatsnew/v0.11.0.rst b/doc/source/whatsnew/v0.11.0.rst
index aa83e7933a444..f05cbc7f07d7d 100644
--- a/doc/source/whatsnew/v0.11.0.rst
+++ b/doc/source/whatsnew/v0.11.0.rst
@@ -367,15 +367,27 @@ Enhancements
- You can now select with a string from a DataFrame with a datelike index, in a similar way to a Series (:issue:`3070`)
- .. ipython:: python
- :okexcept:
+ .. code-block:: ipython
- idx = pd.date_range("2001-10-1", periods=5, freq='M')
- ts = pd.Series(np.random.rand(len(idx)), index=idx)
- ts['2001']
+ In [30]: idx = pd.date_range("2001-10-1", periods=5, freq='M')
- df = pd.DataFrame({'A': ts})
- df['2001']
+ In [31]: ts = pd.Series(np.random.rand(len(idx)), index=idx)
+
+ In [32]: ts['2001']
+ Out[32]:
+ 2001-10-31 0.117967
+ 2001-11-30 0.702184
+ 2001-12-31 0.414034
+ Freq: M, dtype: float64
+
+ In [33]: df = pd.DataFrame({'A': ts})
+
+ In [34]: df['2001']
+ Out[34]:
+ A
+ 2001-10-31 0.117967
+ 2001-11-30 0.702184
+ 2001-12-31 0.414034
- ``Squeeze`` to possibly remove length 1 dimensions from an object.
| Fix an example in doc/source/whatsnew/v0.11.0.rst, which raises a KeyError (see https://pandas.pydata.org/docs/whatsnew/v0.11.0.html#enhancements) | https://api.github.com/repos/pandas-dev/pandas/pulls/54683 | 2023-08-22T08:46:01Z | 2023-08-22T12:39:51Z | 2023-08-22T12:39:51Z | 2023-08-22T12:42:18Z |
Bug Fix for documentation: 54547 | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index cf60717011222..9777ced2b209d 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -12340,9 +12340,6 @@ def last_valid_index(self) -> Hashable | None:
Axis for the function to be applied on.
For `Series` this parameter is unused and defaults to 0.
- For DataFrames, specifying ``axis=None`` will apply the aggregation
- across both axes.
-
.. versionadded:: 2.0.0
skipna : bool, default True
Updates the documentation based on review suggestions for this bug:
https://github.com/pandas-dev/pandas/issues/54547
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54681 | 2023-08-22T05:44:38Z | 2023-09-18T17:11:38Z | null | 2023-09-18T17:11:39Z |
DOC: add url variable for link attribution in HTML section of io.rst | diff --git a/doc/source/user_guide/io.rst b/doc/source/user_guide/io.rst
index 93294d3cbdcfe..fb3e1ba32228c 100644
--- a/doc/source/user_guide/io.rst
+++ b/doc/source/user_guide/io.rst
@@ -2448,7 +2448,7 @@ Read a URL with no options:
.. code-block:: ipython
- In [320]: "https://www.fdic.gov/resources/resolutions/bank-failures/failed-bank-list"
+ In [320]: url = "https://www.fdic.gov/resources/resolutions/bank-failures/failed-bank-list"
In [321]: pd.read_html(url)
Out[321]:
[ Bank NameBank CityCity StateSt ... Acquiring InstitutionAI Closing DateClosing FundFund
The `url` variable was not defined before use in the example; this fixes it. | https://api.github.com/repos/pandas-dev/pandas/pulls/54680 | 2023-08-22T01:10:06Z | 2023-08-22T18:34:58Z | 2023-08-22T18:34:57Z | 2023-08-22T19:09:40Z |
Incorrect reading of CSV containing large integers Issue#52505 | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index 6fdffb4d78341..138be2457d718 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -139,7 +139,7 @@ Timezones
Numeric
^^^^^^^
--
+- Bug in :func:`read_csv` with ``engine="pyarrow"`` causing rounding errors for large integers (:issue:`52505`)
-
Conversion
diff --git a/pandas/io/parsers/arrow_parser_wrapper.py b/pandas/io/parsers/arrow_parser_wrapper.py
index 71bfb00a95b50..bb6bcd3c4d6a0 100644
--- a/pandas/io/parsers/arrow_parser_wrapper.py
+++ b/pandas/io/parsers/arrow_parser_wrapper.py
@@ -223,5 +223,8 @@ def read(self) -> DataFrame:
elif using_pyarrow_string_dtype():
frame = table.to_pandas(types_mapper=arrow_string_types_mapper())
else:
- frame = table.to_pandas()
+ if isinstance(self.kwds.get("dtype"), dict):
+ frame = table.to_pandas(types_mapper=self.kwds["dtype"].get)
+ else:
+ frame = table.to_pandas()
return self._finalize_pandas_output(frame)
diff --git a/pandas/tests/io/parser/dtypes/test_dtypes_basic.py b/pandas/tests/io/parser/dtypes/test_dtypes_basic.py
index 1c0f0939029ff..f797f6392d56c 100644
--- a/pandas/tests/io/parser/dtypes/test_dtypes_basic.py
+++ b/pandas/tests/io/parser/dtypes/test_dtypes_basic.py
@@ -558,3 +558,20 @@ def test_string_inference(all_parsers):
columns=pd.Index(["a", "b"], dtype=dtype),
)
tm.assert_frame_equal(result, expected)
+
+
+def test_accurate_parsing_of_large_integers(all_parsers):
+ # GH#52505
+ data = """SYMBOL,MOMENT,ID,ID_DEAL
+AAPL,20230301181139587,1925036343869802844,
+AAPL,20230301181139587,2023552585717889863,2023552585717263358
+NVDA,20230301181139587,2023552585717889863,2023552585717263359
+AMC,20230301181139587,2023552585717889863,2023552585717263360
+AMZN,20230301181139587,2023552585717889759,2023552585717263360
+MSFT,20230301181139587,2023552585717889863,2023552585717263361
+NVDA,20230301181139587,2023552585717889827,2023552585717263361"""
+ orders = pd.read_csv(StringIO(data), dtype={"ID_DEAL": pd.Int64Dtype()})
+ assert len(orders.loc[orders["ID_DEAL"] == 2023552585717263358, "ID_DEAL"]) == 1
+ assert len(orders.loc[orders["ID_DEAL"] == 2023552585717263359, "ID_DEAL"]) == 1
+ assert len(orders.loc[orders["ID_DEAL"] == 2023552585717263360, "ID_DEAL"]) == 2
+ assert len(orders.loc[orders["ID_DEAL"] == 2023552585717263361, "ID_DEAL"]) == 2
| - [ ] closes #52505
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
| https://api.github.com/repos/pandas-dev/pandas/pulls/54679 | 2023-08-21T23:20:25Z | 2023-08-24T16:25:53Z | 2023-08-24T16:25:53Z | 2023-08-24T16:26:01Z |
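The rounding errors fixed here come from routing int64 columns through float64, whose 53-bit significand cannot represent every integer at the magnitude of the test IDs (around 2**60, where adjacent doubles are 256 apart). An illustration in plain Python, independent of the parser itself:

```python
# float64 carries a 53-bit significand: integers above 2**53 collide.
assert float(2**53) == float(2**53 + 1)    # first collision
assert float(2**60 + 1) == float(2**60)    # near 2**60, spacing is 256
assert int(float(2**60 + 1)) != 2**60 + 1  # the round-trip loses the value
```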
COMPAT: Workaround invalid PyArrow duration conversion | diff --git a/pandas/core/arrays/arrow/array.py b/pandas/core/arrays/arrow/array.py
index 3c65e6b4879e2..43320cf68cbec 100644
--- a/pandas/core/arrays/arrow/array.py
+++ b/pandas/core/arrays/arrow/array.py
@@ -952,6 +952,9 @@ def convert_fill_value(value, pa_type, dtype):
return value
if isinstance(value, (pa.Scalar, pa.Array, pa.ChunkedArray)):
return value
+ if isinstance(value, Timedelta) and value.unit in ("s", "ms"):
+ # Workaround https://github.com/apache/arrow/issues/37291
+ value = value.to_numpy()
if is_array_like(value):
pa_box = pa.array
else:
- [x] closes #54650
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
I think https://github.com/apache/arrow/pull/37064 caused a `pc.fill_null` code path to accept duration types, exposing a PyArrow bug when converting a `pd.Timedelta` to `pa.scalar`: https://github.com/apache/arrow/issues/37291
| https://api.github.com/repos/pandas-dev/pandas/pulls/54678 | 2023-08-21T23:03:04Z | 2023-08-22T13:36:22Z | 2023-08-22T13:36:22Z | 2023-08-22T16:39:32Z |
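The workaround relies on `Timedelta.to_numpy()` yielding a `numpy.timedelta64` scalar whose unit is carried in its dtype, which PyArrow converts correctly. A hedged sketch with plain NumPy scalars (assuming NumPy is available; the pandas call itself is not exercised here):

```python
import numpy as np

# A timedelta64 scalar carries its unit in the dtype, so no implicit
# unit conversion is needed on the Arrow side.
td_ms = np.timedelta64(1500, "ms")
assert td_ms.dtype == np.dtype("timedelta64[ms]")

# The same span expressed at a finer unit compares equal.
assert td_ms == np.timedelta64(1_500_000, "us")
```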
REF: use single-test-class for datetimetz, period tests | diff --git a/pandas/tests/extension/base/accumulate.py b/pandas/tests/extension/base/accumulate.py
index 4648f66112e80..776ff80cd6e17 100644
--- a/pandas/tests/extension/base/accumulate.py
+++ b/pandas/tests/extension/base/accumulate.py
@@ -16,7 +16,12 @@ def _supports_accumulation(self, ser: pd.Series, op_name: str) -> bool:
return False
def check_accumulate(self, ser: pd.Series, op_name: str, skipna: bool):
- alt = ser.astype("float64")
+ try:
+ alt = ser.astype("float64")
+ except TypeError:
+ # e.g. Period can't be cast to float64
+ alt = ser.astype(object)
+
result = getattr(ser, op_name)(skipna=skipna)
if result.dtype == pd.Float32Dtype() and op_name == "cumprod" and skipna:
@@ -37,5 +42,6 @@ def test_accumulate_series(self, data, all_numeric_accumulations, skipna):
if self._supports_accumulation(ser, op_name):
self.check_accumulate(ser, op_name, skipna)
else:
- with pytest.raises(NotImplementedError):
+ with pytest.raises((NotImplementedError, TypeError)):
+ # TODO: require TypeError for things that will _never_ work?
getattr(ser, op_name)(skipna=skipna)
diff --git a/pandas/tests/extension/test_datetime.py b/pandas/tests/extension/test_datetime.py
index 97773d0d40a57..5a7b15ddb01ce 100644
--- a/pandas/tests/extension/test_datetime.py
+++ b/pandas/tests/extension/test_datetime.py
@@ -82,78 +82,63 @@ def cmp(a, b):
# ----------------------------------------------------------------------------
-class BaseDatetimeTests:
- pass
+class TestDatetimeArray(base.ExtensionTests):
+ def _get_expected_exception(self, op_name, obj, other):
+ if op_name in ["__sub__", "__rsub__"]:
+ return None
+ return super()._get_expected_exception(op_name, obj, other)
+ def _supports_accumulation(self, ser, op_name: str) -> bool:
+ return op_name in ["cummin", "cummax"]
-# ----------------------------------------------------------------------------
-# Tests
-class TestDatetimeDtype(BaseDatetimeTests, base.BaseDtypeTests):
- pass
+ def _supports_reduction(self, obj, op_name: str) -> bool:
+ return op_name in ["min", "max", "median", "mean", "std", "any", "all"]
+ @pytest.mark.parametrize("skipna", [True, False])
+ def test_reduce_series_boolean(self, data, all_boolean_reductions, skipna):
+ meth = all_boolean_reductions
+ msg = f"'{meth}' with datetime64 dtypes is deprecated and will raise in"
+ with tm.assert_produces_warning(
+ FutureWarning, match=msg, check_stacklevel=False
+ ):
+ super().test_reduce_series_boolean(data, all_boolean_reductions, skipna)
-class TestConstructors(BaseDatetimeTests, base.BaseConstructorsTests):
def test_series_constructor(self, data):
# Series construction drops any .freq attr
data = data._with_freq(None)
super().test_series_constructor(data)
-
-class TestGetitem(BaseDatetimeTests, base.BaseGetitemTests):
- pass
-
-
-class TestIndex(base.BaseIndexTests):
- pass
-
-
-class TestMethods(BaseDatetimeTests, base.BaseMethodsTests):
@pytest.mark.parametrize("na_action", [None, "ignore"])
def test_map(self, data, na_action):
result = data.map(lambda x: x, na_action=na_action)
tm.assert_extension_array_equal(result, data)
-
-class TestInterface(BaseDatetimeTests, base.BaseInterfaceTests):
- pass
-
-
-class TestArithmeticOps(BaseDatetimeTests, base.BaseArithmeticOpsTests):
- implements = {"__sub__", "__rsub__"}
-
- def _get_expected_exception(self, op_name, obj, other):
- if op_name in self.implements:
- return None
- return super()._get_expected_exception(op_name, obj, other)
-
-
-class TestCasting(BaseDatetimeTests, base.BaseCastingTests):
- pass
-
-
-class TestComparisonOps(BaseDatetimeTests, base.BaseComparisonOpsTests):
- pass
-
-
-class TestMissing(BaseDatetimeTests, base.BaseMissingTests):
- pass
-
-
-class TestReshaping(BaseDatetimeTests, base.BaseReshapingTests):
- pass
-
-
-class TestSetitem(BaseDatetimeTests, base.BaseSetitemTests):
- pass
-
-
-class TestGroupby(BaseDatetimeTests, base.BaseGroupbyTests):
- pass
-
-
-class TestPrinting(BaseDatetimeTests, base.BasePrintingTests):
- pass
-
-
-class Test2DCompat(BaseDatetimeTests, base.NDArrayBacked2DTests):
+ @pytest.mark.parametrize("engine", ["c", "python"])
+ def test_EA_types(self, engine, data):
+ expected_msg = r".*must implement _from_sequence_of_strings.*"
+ with pytest.raises(NotImplementedError, match=expected_msg):
+ super().test_EA_types(engine, data)
+
+ def check_reduce(self, ser: pd.Series, op_name: str, skipna: bool):
+ if op_name in ["median", "mean", "std"]:
+ alt = ser.astype("int64")
+
+ res_op = getattr(ser, op_name)
+ exp_op = getattr(alt, op_name)
+ result = res_op(skipna=skipna)
+ expected = exp_op(skipna=skipna)
+ if op_name in ["mean", "median"]:
+ # error: Item "dtype[Any]" of "dtype[Any] | ExtensionDtype"
+ # has no attribute "tz"
+ tz = ser.dtype.tz # type: ignore[union-attr]
+ expected = pd.Timestamp(expected, tz=tz)
+ else:
+ expected = pd.Timedelta(expected)
+ tm.assert_almost_equal(result, expected)
+
+ else:
+ return super().check_reduce(ser, op_name, skipna)
+
+
+class Test2DCompat(base.NDArrayBacked2DTests):
pass
diff --git a/pandas/tests/extension/test_period.py b/pandas/tests/extension/test_period.py
index 63297c20daa97..2d1d213322bac 100644
--- a/pandas/tests/extension/test_period.py
+++ b/pandas/tests/extension/test_period.py
@@ -13,10 +13,17 @@
be added to the array-specific tests in `pandas/tests/arrays/`.
"""
+from __future__ import annotations
+
+from typing import TYPE_CHECKING
+
import numpy as np
import pytest
-from pandas._libs import iNaT
+from pandas._libs import (
+ Period,
+ iNaT,
+)
from pandas.compat import is_platform_windows
from pandas.compat.numpy import np_version_gte1p24
@@ -26,6 +33,9 @@
from pandas.core.arrays import PeriodArray
from pandas.tests.extension import base
+if TYPE_CHECKING:
+ import pandas as pd
+
@pytest.fixture(params=["D", "2D"])
def dtype(request):
@@ -61,27 +71,36 @@ def data_for_grouping(dtype):
return PeriodArray([B, B, NA, NA, A, A, B, C], dtype=dtype)
-class BasePeriodTests:
- pass
-
-
-class TestPeriodDtype(BasePeriodTests, base.BaseDtypeTests):
- pass
+class TestPeriodArray(base.ExtensionTests):
+ def _get_expected_exception(self, op_name, obj, other):
+ if op_name in ("__sub__", "__rsub__"):
+ return None
+ return super()._get_expected_exception(op_name, obj, other)
+ def _supports_accumulation(self, ser, op_name: str) -> bool:
+ return op_name in ["cummin", "cummax"]
-class TestConstructors(BasePeriodTests, base.BaseConstructorsTests):
- pass
+ def _supports_reduction(self, obj, op_name: str) -> bool:
+ return op_name in ["min", "max", "median"]
+ def check_reduce(self, ser: pd.Series, op_name: str, skipna: bool):
+ if op_name == "median":
+ res_op = getattr(ser, op_name)
-class TestGetitem(BasePeriodTests, base.BaseGetitemTests):
- pass
+ alt = ser.astype("int64")
+ exp_op = getattr(alt, op_name)
+ result = res_op(skipna=skipna)
+ expected = exp_op(skipna=skipna)
+ # error: Item "dtype[Any]" of "dtype[Any] | ExtensionDtype" has no
+ # attribute "freq"
+ freq = ser.dtype.freq # type: ignore[union-attr]
+ expected = Period._from_ordinal(int(expected), freq=freq)
+ tm.assert_almost_equal(result, expected)
-class TestIndex(base.BaseIndexTests):
- pass
-
+ else:
+ return super().check_reduce(ser, op_name, skipna)
-class TestMethods(BasePeriodTests, base.BaseMethodsTests):
@pytest.mark.parametrize("periods", [1, -2])
def test_diff(self, data, periods):
if is_platform_windows() and np_version_gte1p24:
@@ -96,48 +115,5 @@ def test_map(self, data, na_action):
tm.assert_extension_array_equal(result, data)
-class TestInterface(BasePeriodTests, base.BaseInterfaceTests):
- pass
-
-
-class TestArithmeticOps(BasePeriodTests, base.BaseArithmeticOpsTests):
- def _get_expected_exception(self, op_name, obj, other):
- if op_name in ("__sub__", "__rsub__"):
- return None
- return super()._get_expected_exception(op_name, obj, other)
-
-
-class TestCasting(BasePeriodTests, base.BaseCastingTests):
- pass
-
-
-class TestComparisonOps(BasePeriodTests, base.BaseComparisonOpsTests):
- pass
-
-
-class TestMissing(BasePeriodTests, base.BaseMissingTests):
- pass
-
-
-class TestReshaping(BasePeriodTests, base.BaseReshapingTests):
- pass
-
-
-class TestSetitem(BasePeriodTests, base.BaseSetitemTests):
- pass
-
-
-class TestGroupby(BasePeriodTests, base.BaseGroupbyTests):
- pass
-
-
-class TestPrinting(BasePeriodTests, base.BasePrintingTests):
- pass
-
-
-class TestParsing(BasePeriodTests, base.BaseParsingTests):
- pass
-
-
-class Test2DCompat(BasePeriodTests, base.NDArrayBacked2DTests):
+class Test2DCompat(base.NDArrayBacked2DTests):
pass
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54676 | 2023-08-21T22:15:38Z | 2023-08-22T22:47:44Z | 2023-08-22T22:47:44Z | 2023-08-22T23:13:03Z |
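The `check_reduce` pattern used for datetime and period data above — reduce on the integer (epoch/ordinal) view, then re-wrap the result in the original scalar type — can be sketched with the standard library, using epoch seconds in place of pandas' int64 nanoseconds:

```python
from datetime import datetime, timezone
from statistics import median

stamps = [
    datetime(2023, 1, 1, tzinfo=timezone.utc),
    datetime(2023, 1, 3, tzinfo=timezone.utc),
    datetime(2023, 1, 9, tzinfo=timezone.utc),
]

# Reduce on the numeric view, then rebuild the datetime-like scalar.
epoch_median = median(ts.timestamp() for ts in stamps)
result = datetime.fromtimestamp(epoch_median, tz=timezone.utc)

assert result == datetime(2023, 1, 3, tzinfo=timezone.utc)
```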
Smcbeth/series write parquet | diff --git a/pandas/core/series.py b/pandas/core/series.py
index 564c799d7ab66..51e5e43c65645 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -1874,6 +1874,125 @@ def to_markdown(
buf, mode=mode, index=index, storage_options=storage_options, **kwargs
)
+ @overload
+ def to_parquet(
+ self,
+ path: None = ...,
+ engine: Literal["auto", "pyarrow", "fastparquet"] = ...,
+ compression: str | None = ...,
+ index: bool | None = ...,
+ partition_cols: list[str] | None = ...,
+ storage_options: StorageOptions = ...,
+ **kwargs,
+ ) -> bytes:
+ ...
+
+ @overload
+ def to_parquet(
+ self,
+ path: FilePath | WriteBuffer[bytes],
+ engine: Literal["auto", "pyarrow", "fastparquet"] = ...,
+ compression: str | None = ...,
+ index: bool | None = ...,
+ partition_cols: list[str] | None = ...,
+ storage_options: StorageOptions = ...,
+ **kwargs,
+ ) -> None:
+ ...
+
+ @doc(storage_options=_shared_docs["storage_options"])
+ def to_parquet(
+ self,
+ path: FilePath | WriteBuffer[bytes] | None = None,
+ engine: Literal["auto", "pyarrow", "fastparquet"] = "auto",
+ compression: str | None = "snappy",
+ index: bool | None = None,
+ partition_cols: list[str] | None = None,
+ storage_options: StorageOptions | None = None,
+ **kwargs,
+ ) -> bytes | None:
+ """
+ Write a Series to the binary parquet format.
+ This function writes the series as a `parquet file
+ <https://parquet.apache.org/>`_. You can choose different parquet
+ backends, and have the option of compression. See
+ :ref:`the user guide <io.parquet>` for more details.
+ Parameters
+ ----------
+ path : str, path object, file-like object, or None, default None
+ String, path object (implementing ``os.PathLike[str]``), or file-like
+ object implementing a binary ``write()`` function. If None, the result is
+ returned as bytes. If a string or path, it will be used as Root Directory
+ path when writing a partitioned dataset.
+ .. versionchanged:: 1.2.0
+ Previously this was "fname"
+ engine : {{'auto', 'pyarrow', 'fastparquet'}}, default 'auto'
+ Parquet library to use. If 'auto', then the option
+ ``io.parquet.engine`` is used. The default ``io.parquet.engine``
+ behavior is to try 'pyarrow', falling back to 'fastparquet' if
+ 'pyarrow' is unavailable.
+ compression : str or None, default 'snappy'
+ Name of the compression to use. Use ``None`` for no compression.
+ Supported options: 'snappy', 'gzip', 'brotli', 'lz4', 'zstd'.
+ index : bool, default None
+ If ``True``, include the dataframe's index(es) in the file output.
+ If ``False``, they will not be written to the file.
+ If ``None``, similar to ``True`` the dataframe's index(es)
+ will be saved. However, instead of being saved as values,
+ the RangeIndex will be stored as a range in the metadata so it
+ doesn't require much space and is faster. Other indexes will
+ be included as columns in the file output.
+ partition_cols : list, optional, default None
+ Column names by which to partition the dataset.
+ Columns are partitioned in the order they are given.
+ Must be None if path is not a string.
+ {storage_options}
+ .. versionadded:: 1.2.0
+ **kwargs
+ Additional arguments passed to the parquet library. See
+ :ref:`pandas io <io.parquet>` for more details.
+ Returns
+ -------
+ bytes if no path argument is provided else None
+ See Also
+ --------
+ To do: Add other io methods to Series
+ Notes
+ -----
+ This function requires either the `fastparquet
+ <https://pypi.org/project/fastparquet>`_ or `pyarrow
+ <https://arrow.apache.org/docs/python/>`_ library.
+ Examples
+ --------
+ >>> df = pd.DataFrame(data={{'col1': [1, 2], 'col2': [3, 4]}})
+ >>> df.to_parquet('df.parquet.gzip',
+ ... compression='gzip') # doctest: +SKIP
+ >>> pd.read_parquet('df.parquet.gzip') # doctest: +SKIP
+ col1 col2
+ 0 1 3
+ 1 2 4
+ If you want to get a buffer to the parquet content you can use a io.BytesIO
+ object, as long as you don't use partition_cols, which creates multiple files.
+ >>> import io
+ >>> f = io.BytesIO()
+ >>> df.to_parquet(f)
+ >>> f.seek(0)
+ 0
+ >>> content = f.read()
+ """
+ from pandas.io.parquet import to_parquet
+
+ return to_parquet(
+ self,
+ path,
+ engine,
+ compression=compression,
+ index=index,
+ partition_cols=partition_cols,
+ storage_options=storage_options,
+ **kwargs,
+ )
+
# ----------------------------------------------------------------------
def items(self) -> Iterable[tuple[Hashable, Any]]:
diff --git a/pandas/io/parquet.py b/pandas/io/parquet.py
index f51b98a929440..0db4bed385f17 100644
--- a/pandas/io/parquet.py
+++ b/pandas/io/parquet.py
@@ -24,6 +24,7 @@
import pandas as pd
from pandas import (
DataFrame,
+ Series,
get_option,
)
from pandas.core.shared_docs import _shared_docs
@@ -146,9 +147,9 @@ def _get_path_or_handle(
class BaseImpl:
@staticmethod
- def validate_dataframe(df: DataFrame) -> None:
- if not isinstance(df, DataFrame):
- raise ValueError("to_parquet only supports IO with DataFrames")
+ def validate_data(data: DataFrame | Series) -> None:
+ if not isinstance(data, DataFrame) and not isinstance(data, Series):
+ raise ValueError("to_parquet only supports IO with DataFrames and Series")
def write(self, df: DataFrame, path, compression, **kwargs):
raise AbstractMethodError(self)
@@ -171,7 +172,7 @@ def __init__(self) -> None:
def write(
self,
- df: DataFrame,
+ data: DataFrame | Series,
path: FilePath | WriteBuffer[bytes],
compression: str | None = "snappy",
index: bool | None = None,
@@ -180,18 +181,20 @@ def write(
filesystem=None,
**kwargs,
) -> None:
- self.validate_dataframe(df)
+ self.validate_data(data)
from_pandas_kwargs: dict[str, Any] = {"schema": kwargs.pop("schema", None)}
if index is not None:
from_pandas_kwargs["preserve_index"] = index
+ if isinstance(data, Series):
+ table = self.api.Table.from_pandas(data.to_frame(), **from_pandas_kwargs)
+ else:
+ table = self.api.Table.from_pandas(data, **from_pandas_kwargs)
- table = self.api.Table.from_pandas(df, **from_pandas_kwargs)
-
- if df.attrs:
- df_metadata = {"PANDAS_ATTRS": json.dumps(df.attrs)}
+ if data.attrs:
+ data_metadata = {"PANDAS_ATTRS": json.dumps(data.attrs)}
existing_metadata = table.schema.metadata
- merged_metadata = {**existing_metadata, **df_metadata}
+ merged_metadata = {**existing_metadata, **data_metadata}
table = table.replace_schema_metadata(merged_metadata)
path_or_handle, handles, filesystem = _get_path_or_handle(
@@ -302,7 +305,7 @@ def __init__(self) -> None:
def write(
self,
- df: DataFrame,
+ data: DataFrame | Series,
path,
compression: Literal["snappy", "gzip", "brotli"] | None = "snappy",
index=None,
@@ -311,7 +314,7 @@ def write(
filesystem=None,
**kwargs,
) -> None:
- self.validate_dataframe(df)
+ self.validate_data(data)
if "partition_on" in kwargs and partition_cols is not None:
raise ValueError(
@@ -346,7 +349,7 @@ def write(
with catch_warnings(record=True):
self.api.write(
path,
- df,
+ data,
compression=compression,
write_index=index,
partition_on=partition_cols,
@@ -406,7 +409,7 @@ def read(
@doc(storage_options=_shared_docs["storage_options"])
def to_parquet(
- df: DataFrame,
+ data: DataFrame | Series,
path: FilePath | WriteBuffer[bytes] | None = None,
engine: str = "auto",
compression: str | None = "snappy",
@@ -417,11 +420,11 @@ def to_parquet(
**kwargs,
) -> bytes | None:
"""
- Write a DataFrame to the parquet format.
+ Write a DataFrame or a Series to the parquet format.
Parameters
----------
- df : DataFrame
+ data : DataFrame or Series
path : str, path object, file-like object, or None, default None
String, path object (implementing ``os.PathLike[str]``), or file-like
object implementing a binary ``write()`` function. If None, the result is
@@ -481,7 +484,7 @@ def to_parquet(
path_or_buf: FilePath | WriteBuffer[bytes] = io.BytesIO() if path is None else path
impl.write(
- df,
+ data,
path_or_buf,
compression=compression,
index=index,
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54675 | 2023-08-21T21:43:09Z | 2023-08-25T18:02:23Z | null | 2023-08-25T18:02:23Z |
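The diff above merges the serialized `data.attrs` into the Arrow table's existing schema metadata with a dict-unpacking merge. The pattern can be sketched in isolation; this is a minimal sketch, not pandas internals — `merge_schema_metadata` and its arguments are hypothetical names (real pyarrow schema metadata maps bytes to bytes, simplified here to plain strings):

```python
import json


def merge_schema_metadata(existing: dict, attrs: dict) -> dict:
    # Mirror the pattern from the diff: serialize ``data.attrs`` to JSON
    # under a single "PANDAS_ATTRS" key, then merge it into the table's
    # existing schema metadata. With ``{**a, **b}``, keys from ``b`` win
    # on collision, so the attrs entry overrides any stale copy.
    data_metadata = {"PANDAS_ATTRS": json.dumps(attrs)}
    return {**existing, **data_metadata}


merged = merge_schema_metadata({"creator": "test"}, {"owner": "alice"})
# Existing metadata survives, and the attrs travel as one JSON string
# that a reader can json.loads back into a dict.
```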
adding to_parquet to the Series class and modifying the write methods… | diff --git a/pandas/core/series.py b/pandas/core/series.py
index 564c799d7ab66..5c72360dc0864 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -348,7 +348,7 @@ class Series(base.IndexOpsMixin, NDFrame): # type: ignore[misc]
_internal_names_set = {"index", "name"} | NDFrame._internal_names_set
_accessors = {"dt", "cat", "str", "sparse"}
_hidden_attrs = (
- base.IndexOpsMixin._hidden_attrs | NDFrame._hidden_attrs | frozenset([])
+ base.IndexOpsMixin._hidden_attrs | NDFrame._hidden_attrs | frozenset([])
)
# similar to __array_priority__, positions Series after DataFrame
@@ -369,19 +369,19 @@ class Series(base.IndexOpsMixin, NDFrame): # type: ignore[misc]
# Constructors
def __init__(
- self,
- data=None,
- index=None,
- dtype: Dtype | None = None,
- name=None,
- copy: bool | None = None,
- fastpath: bool = False,
+ self,
+ data=None,
+ index=None,
+ dtype: Dtype | None = None,
+ name=None,
+ copy: bool | None = None,
+ fastpath: bool = False,
) -> None:
if (
- isinstance(data, (SingleBlockManager, SingleArrayManager))
- and index is None
- and dtype is None
- and (copy is False or copy is None)
+ isinstance(data, (SingleBlockManager, SingleArrayManager))
+ and index is None
+ and dtype is None
+ and (copy is False or copy is None)
):
if using_copy_on_write():
data = data.copy(deep=False)
@@ -522,7 +522,7 @@ def __init__(
self._set_axis(0, index)
def _init_dict(
- self, data, index: Index | None = None, dtype: DtypeObj | None = None
+ self, data, index: Index | None = None, dtype: DtypeObj | None = None
):
"""
Derive the "_mgr" and "index" attributes of a new Series from a
@@ -1240,10 +1240,10 @@ def __setitem__(self, key, value) -> None:
key = np.asarray(key, dtype=bool)
if (
- is_list_like(value)
- and len(value) != len(self)
- and not isinstance(value, Series)
- and not is_object_dtype(self.dtype)
+ is_list_like(value)
+ and len(value) != len(self)
+ and not isinstance(value, Series)
+ and not is_object_dtype(self.dtype)
):
# Series will be reindexed to have matching length inside
# _where call below
@@ -1394,7 +1394,7 @@ def _check_is_chained_assignment_possible(self) -> bool:
return super()._check_is_chained_assignment_possible()
def _maybe_update_cacher(
- self, clear: bool = False, verify_is_copy: bool = True, inplace: bool = False
+ self, clear: bool = False, verify_is_copy: bool = True, inplace: bool = False
) -> None:
"""
See NDFrame._maybe_update_cacher.__doc__
@@ -1488,48 +1488,48 @@ def repeat(self, repeats: int | Sequence[int], axis: None = None) -> Series:
@overload
def reset_index(
- self,
- level: IndexLabel = ...,
- *,
- drop: Literal[False] = ...,
- name: Level = ...,
- inplace: Literal[False] = ...,
- allow_duplicates: bool = ...,
+ self,
+ level: IndexLabel = ...,
+ *,
+ drop: Literal[False] = ...,
+ name: Level = ...,
+ inplace: Literal[False] = ...,
+ allow_duplicates: bool = ...,
) -> DataFrame:
...
@overload
def reset_index(
- self,
- level: IndexLabel = ...,
- *,
- drop: Literal[True],
- name: Level = ...,
- inplace: Literal[False] = ...,
- allow_duplicates: bool = ...,
+ self,
+ level: IndexLabel = ...,
+ *,
+ drop: Literal[True],
+ name: Level = ...,
+ inplace: Literal[False] = ...,
+ allow_duplicates: bool = ...,
) -> Series:
...
@overload
def reset_index(
- self,
- level: IndexLabel = ...,
- *,
- drop: bool = ...,
- name: Level = ...,
- inplace: Literal[True],
- allow_duplicates: bool = ...,
+ self,
+ level: IndexLabel = ...,
+ *,
+ drop: bool = ...,
+ name: Level = ...,
+ inplace: Literal[True],
+ allow_duplicates: bool = ...,
) -> None:
...
def reset_index(
- self,
- level: IndexLabel | None = None,
- *,
- drop: bool = False,
- name: Level = lib.no_default,
- inplace: bool = False,
- allow_duplicates: bool = False,
+ self,
+ level: IndexLabel | None = None,
+ *,
+ drop: bool = False,
+ name: Level = lib.no_default,
+ inplace: bool = False,
+ allow_duplicates: bool = False,
) -> DataFrame | Series | None:
"""
Generate a new DataFrame or Series with the index reset.
@@ -1686,48 +1686,48 @@ def __repr__(self) -> str:
@overload
def to_string(
- self,
- buf: None = ...,
- na_rep: str = ...,
- float_format: str | None = ...,
- header: bool = ...,
- index: bool = ...,
- length: bool = ...,
- dtype=...,
- name=...,
- max_rows: int | None = ...,
- min_rows: int | None = ...,
+ self,
+ buf: None = ...,
+ na_rep: str = ...,
+ float_format: str | None = ...,
+ header: bool = ...,
+ index: bool = ...,
+ length: bool = ...,
+ dtype=...,
+ name=...,
+ max_rows: int | None = ...,
+ min_rows: int | None = ...,
) -> str:
...
@overload
def to_string(
- self,
- buf: FilePath | WriteBuffer[str],
- na_rep: str = ...,
- float_format: str | None = ...,
- header: bool = ...,
- index: bool = ...,
- length: bool = ...,
- dtype=...,
- name=...,
- max_rows: int | None = ...,
- min_rows: int | None = ...,
+ self,
+ buf: FilePath | WriteBuffer[str],
+ na_rep: str = ...,
+ float_format: str | None = ...,
+ header: bool = ...,
+ index: bool = ...,
+ length: bool = ...,
+ dtype=...,
+ name=...,
+ max_rows: int | None = ...,
+ min_rows: int | None = ...,
) -> None:
...
def to_string(
- self,
- buf: FilePath | WriteBuffer[str] | None = None,
- na_rep: str = "NaN",
- float_format: str | None = None,
- header: bool = True,
- index: bool = True,
- length: bool = False,
- dtype: bool = False,
- name: bool = False,
- max_rows: int | None = None,
- min_rows: int | None = None,
+ self,
+ buf: FilePath | WriteBuffer[str] | None = None,
+ na_rep: str = "NaN",
+ float_format: str | None = None,
+ header: bool = True,
+ index: bool = True,
+ length: bool = False,
+ dtype: bool = False,
+ name: bool = False,
+ max_rows: int | None = None,
+ min_rows: int | None = None,
) -> str | None:
"""
Render a string representation of the Series.
@@ -1832,12 +1832,12 @@ def to_string(
),
)
def to_markdown(
- self,
- buf: IO[str] | None = None,
- mode: str = "wt",
- index: bool = True,
- storage_options: StorageOptions | None = None,
- **kwargs,
+ self,
+ buf: IO[str] | None = None,
+ mode: str = "wt",
+ index: bool = True,
+ storage_options: StorageOptions | None = None,
+ **kwargs,
) -> str | None:
"""
Print {klass} in Markdown-friendly format.
@@ -1876,6 +1876,142 @@ def to_markdown(
# ----------------------------------------------------------------------
+ @overload
+ def to_parquet(
+ self,
+ path: None = ...,
+ engine: Literal["auto", "pyarrow", "fastparquet"] = ...,
+ compression: str | None = ...,
+ index: bool | None = ...,
+ partition_cols: list[str] | None = ...,
+ storage_options: StorageOptions = ...,
+ **kwargs,
+ ) -> bytes:
+ ...
+
+ @overload
+ def to_parquet(
+ self,
+ path: FilePath | WriteBuffer[bytes],
+ engine: Literal["auto", "pyarrow", "fastparquet"] = ...,
+ compression: str | None = ...,
+ index: bool | None = ...,
+ partition_cols: list[str] | None = ...,
+ storage_options: StorageOptions = ...,
+ **kwargs,
+ ) -> None:
+ ...
+
+ @doc(storage_options=_shared_docs["storage_options"])
+ def to_parquet(
+ self,
+ path: FilePath | WriteBuffer[bytes] | None = None,
+ engine: Literal["auto", "pyarrow", "fastparquet"] = "auto",
+ compression: str | None = "snappy",
+ index: bool | None = None,
+ partition_cols: list[str] | None = None,
+ storage_options: StorageOptions | None = None,
+ **kwargs,
+ ) -> bytes | None:
+ """
+ Write a Series to the binary parquet format.
+
+ This function writes the Series as a `parquet file
+ <https://parquet.apache.org/>`_. You can choose different parquet
+ backends, and have the option of compression. See
+ :ref:`the user guide <io.parquet>` for more details.
+
+ Parameters
+ ----------
+ path : str, path object, file-like object, or None, default None
+ String, path object (implementing ``os.PathLike[str]``), or file-like
+ object implementing a binary ``write()`` function. If None, the result is
+ returned as bytes. If a string or path, it will be used as Root Directory
+ path when writing a partitioned dataset.
+
+ .. versionchanged:: 1.2.0
+
+ Previously this was "fname"
+
+ engine : {{'auto', 'pyarrow', 'fastparquet'}}, default 'auto'
+ Parquet library to use. If 'auto', then the option
+ ``io.parquet.engine`` is used. The default ``io.parquet.engine``
+ behavior is to try 'pyarrow', falling back to 'fastparquet' if
+ 'pyarrow' is unavailable.
+ compression : str or None, default 'snappy'
+ Name of the compression to use. Use ``None`` for no compression.
+ Supported options: 'snappy', 'gzip', 'brotli', 'lz4', 'zstd'.
+ index : bool, default None
+ If ``True``, include the Series' index in the file output.
+ If ``False``, it will not be written to the file.
+ If ``None``, similar to ``True`` the Series' index
+ will be saved. However, instead of being saved as values,
+ the RangeIndex will be stored as a range in the metadata so it
+ doesn't require much space and is faster. Other indexes will
+ be included as columns in the file output.
+ partition_cols : list, optional, default None
+ Column names by which to partition the dataset.
+ Columns are partitioned in the order they are given.
+ Must be None if path is not a string.
+ {storage_options}
+
+ .. versionadded:: 1.2.0
+
+ **kwargs
+ Additional arguments passed to the parquet library. See
+ :ref:`pandas io <io.parquet>` for more details.
+
+ Returns
+ -------
+ bytes if no path argument is provided else None
+
+ See Also
+ --------
+ read_parquet : Read a parquet file.
+ DataFrame.to_orc : Write an orc file.
+ DataFrame.to_csv : Write a csv file.
+ DataFrame.to_sql : Write to a sql table.
+ DataFrame.to_hdf : Write to hdf.
+
+ Notes
+ -----
+ This function requires either the `fastparquet
+ <https://pypi.org/project/fastparquet>`_ or `pyarrow
+ <https://arrow.apache.org/docs/python/>`_ library.
+
+ Examples
+ --------
+ >>> ser = pd.Series(data=[1, 2], name='col1')
+ >>> ser.to_parquet('ser.parquet.gzip',
+ ... compression='gzip') # doctest: +SKIP
+ >>> pd.read_parquet('ser.parquet.gzip') # doctest: +SKIP
+ col1
+ 0 1
+ 1 2
+
+ If you want to get a buffer to the parquet content you can use an io.BytesIO
+ object, as long as you don't use partition_cols, which creates multiple files.
+
+ >>> import io
+ >>> f = io.BytesIO()
+ >>> ser.to_parquet(f)
+ >>> f.seek(0)
+ 0
+ >>> content = f.read()
+ """
+ from pandas.io.parquet import to_parquet
+
+ return to_parquet(
+ self,
+ path,
+ engine,
+ compression=compression,
+ index=index,
+ partition_cols=partition_cols,
+ storage_options=storage_options,
+ **kwargs,
+ )
+
def items(self) -> Iterable[tuple[Hashable, Any]]:
"""
Lazily iterate over (index, value) tuples.
@@ -2005,7 +2141,7 @@ def to_frame(self, name: Hashable = lib.no_default) -> DataFrame:
return df.__finalize__(self, method="to_frame")
def _set_name(
- self, name, inplace: bool = False, deep: bool | None = None
+ self, name, inplace: bool = False, deep: bool | None = None
) -> Series:
"""
Set the Series name.
@@ -2110,15 +2246,15 @@ def _set_name(
)
@Appender(_shared_docs["groupby"] % _shared_doc_kwargs)
def groupby(
- self,
- by=None,
- axis: Axis = 0,
- level: IndexLabel | None = None,
- as_index: bool = True,
- sort: bool = True,
- group_keys: bool = True,
- observed: bool | lib.NoDefault = lib.no_default,
- dropna: bool = True,
+ self,
+ by=None,
+ axis: Axis = 0,
+ level: IndexLabel | None = None,
+ as_index: bool = True,
+ sort: bool = True,
+ group_keys: bool = True,
+ observed: bool | lib.NoDefault = lib.no_default,
+ dropna: bool = True,
) -> SeriesGroupBy:
from pandas.core.groupby.generic import SeriesGroupBy
@@ -2288,32 +2424,32 @@ def unique(self) -> ArrayLike: # pylint: disable=useless-parent-delegation
@overload
def drop_duplicates(
- self,
- *,
- keep: DropKeep = ...,
- inplace: Literal[False] = ...,
- ignore_index: bool = ...,
+ self,
+ *,
+ keep: DropKeep = ...,
+ inplace: Literal[False] = ...,
+ ignore_index: bool = ...,
) -> Series:
...
@overload
def drop_duplicates(
- self, *, keep: DropKeep = ..., inplace: Literal[True], ignore_index: bool = ...
+ self, *, keep: DropKeep = ..., inplace: Literal[True], ignore_index: bool = ...
) -> None:
...
@overload
def drop_duplicates(
- self, *, keep: DropKeep = ..., inplace: bool = ..., ignore_index: bool = ...
+ self, *, keep: DropKeep = ..., inplace: bool = ..., ignore_index: bool = ...
) -> Series | None:
...
def drop_duplicates(
- self,
- *,
- keep: DropKeep = "first",
- inplace: bool = False,
- ignore_index: bool = False,
+ self,
+ *,
+ keep: DropKeep = "first",
+ inplace: bool = False,
+ ignore_index: bool = False,
) -> Series | None:
"""
Return Series with duplicate values removed.
@@ -2694,30 +2830,30 @@ def round(self, decimals: int = 0, *args, **kwargs) -> Series:
@overload
def quantile(
- self, q: float = ..., interpolation: QuantileInterpolation = ...
+ self, q: float = ..., interpolation: QuantileInterpolation = ...
) -> float:
...
@overload
def quantile(
- self,
- q: Sequence[float] | AnyArrayLike,
- interpolation: QuantileInterpolation = ...,
+ self,
+ q: Sequence[float] | AnyArrayLike,
+ interpolation: QuantileInterpolation = ...,
) -> Series:
...
@overload
def quantile(
- self,
- q: float | Sequence[float] | AnyArrayLike = ...,
- interpolation: QuantileInterpolation = ...,
+ self,
+ q: float | Sequence[float] | AnyArrayLike = ...,
+ interpolation: QuantileInterpolation = ...,
) -> float | Series:
...
def quantile(
- self,
- q: float | Sequence[float] | AnyArrayLike = 0.5,
- interpolation: QuantileInterpolation = "linear",
+ self,
+ q: float | Sequence[float] | AnyArrayLike = 0.5,
+ interpolation: QuantileInterpolation = "linear",
) -> float | Series:
"""
Return value at the given quantile.
@@ -2779,10 +2915,10 @@ def quantile(
return result.iloc[0]
def corr(
- self,
- other: Series,
- method: CorrelationMethod = "pearson",
- min_periods: int | None = None,
+ self,
+ other: Series,
+ method: CorrelationMethod = "pearson",
+ min_periods: int | None = None,
) -> float:
"""
Compute correlation with `other` Series, excluding missing values.
@@ -2867,10 +3003,10 @@ def corr(
)
def cov(
- self,
- other: Series,
- min_periods: int | None = None,
- ddof: int | None = 1,
+ self,
+ other: Series,
+ min_periods: int | None = None,
+ ddof: int | None = 1,
) -> float:
"""
Compute covariance with Series, excluding missing values.
@@ -3142,10 +3278,10 @@ def __rmatmul__(self, other):
@doc(base.IndexOpsMixin.searchsorted, klass="Series")
# Signature of "searchsorted" incompatible with supertype "IndexOpsMixin"
def searchsorted( # type: ignore[override]
- self,
- value: NumpyValueArrayLike | ExtensionArray,
- side: Literal["left", "right"] = "left",
- sorter: NumpySorter | None = None,
+ self,
+ value: NumpyValueArrayLike | ExtensionArray,
+ side: Literal["left", "right"] = "left",
+ sorter: NumpySorter | None = None,
) -> npt.NDArray[np.intp] | np.intp:
return base.IndexOpsMixin.searchsorted(self, value, side=side, sorter=sorter)
@@ -3153,7 +3289,7 @@ def searchsorted( # type: ignore[override]
# Combination
def _append(
- self, to_append, ignore_index: bool = False, verify_integrity: bool = False
+ self, to_append, ignore_index: bool = False, verify_integrity: bool = False
):
from pandas.core.reshape.concat import concat
@@ -3236,12 +3372,12 @@ def _append(
klass=_shared_doc_kwargs["klass"],
)
def compare(
- self,
- other: Series,
- align_axis: Axis = 1,
- keep_shape: bool = False,
- keep_equal: bool = False,
- result_names: Suffixes = ("self", "other"),
+ self,
+ other: Series,
+ align_axis: Axis = 1,
+ keep_shape: bool = False,
+ keep_equal: bool = False,
+ result_names: Suffixes = ("self", "other"),
) -> DataFrame | Series:
return super().compare(
other=other,
@@ -3252,10 +3388,10 @@ def compare(
)
def combine(
- self,
- other: Series | Hashable,
- func: Callable[[Hashable, Hashable], Hashable],
- fill_value: Hashable | None = None,
+ self,
+ other: Series | Hashable,
+ func: Callable[[Hashable, Hashable], Hashable],
+ fill_value: Hashable | None = None,
) -> Series:
"""
Combine the Series with a Series or scalar according to `func`.
@@ -3502,56 +3638,56 @@ def update(self, other: Series | Sequence | Mapping) -> None:
@overload
def sort_values(
- self,
- *,
- axis: Axis = ...,
- ascending: bool | Sequence[bool] = ...,
- inplace: Literal[False] = ...,
- kind: SortKind = ...,
- na_position: NaPosition = ...,
- ignore_index: bool = ...,
- key: ValueKeyFunc = ...,
+ self,
+ *,
+ axis: Axis = ...,
+ ascending: bool | Sequence[bool] = ...,
+ inplace: Literal[False] = ...,
+ kind: SortKind = ...,
+ na_position: NaPosition = ...,
+ ignore_index: bool = ...,
+ key: ValueKeyFunc = ...,
) -> Series:
...
@overload
def sort_values(
- self,
- *,
- axis: Axis = ...,
- ascending: bool | Sequence[bool] = ...,
- inplace: Literal[True],
- kind: SortKind = ...,
- na_position: NaPosition = ...,
- ignore_index: bool = ...,
- key: ValueKeyFunc = ...,
+ self,
+ *,
+ axis: Axis = ...,
+ ascending: bool | Sequence[bool] = ...,
+ inplace: Literal[True],
+ kind: SortKind = ...,
+ na_position: NaPosition = ...,
+ ignore_index: bool = ...,
+ key: ValueKeyFunc = ...,
) -> None:
...
@overload
def sort_values(
- self,
- *,
- axis: Axis = ...,
- ascending: bool | Sequence[bool] = ...,
- inplace: bool = ...,
- kind: SortKind = ...,
- na_position: NaPosition = ...,
- ignore_index: bool = ...,
- key: ValueKeyFunc = ...,
+ self,
+ *,
+ axis: Axis = ...,
+ ascending: bool | Sequence[bool] = ...,
+ inplace: bool = ...,
+ kind: SortKind = ...,
+ na_position: NaPosition = ...,
+ ignore_index: bool = ...,
+ key: ValueKeyFunc = ...,
) -> Series | None:
...
def sort_values(
- self,
- *,
- axis: Axis = 0,
- ascending: bool | Sequence[bool] = True,
- inplace: bool = False,
- kind: SortKind = "quicksort",
- na_position: NaPosition = "last",
- ignore_index: bool = False,
- key: ValueKeyFunc | None = None,
+ self,
+ *,
+ axis: Axis = 0,
+ ascending: bool | Sequence[bool] = True,
+ inplace: bool = False,
+ kind: SortKind = "quicksort",
+ na_position: NaPosition = "last",
+ ignore_index: bool = False,
+ key: ValueKeyFunc | None = None,
) -> Series | None:
"""
Sort by the values.
@@ -3745,64 +3881,64 @@ def sort_values(
@overload
def sort_index(
- self,
- *,
- axis: Axis = ...,
- level: IndexLabel = ...,
- ascending: bool | Sequence[bool] = ...,
- inplace: Literal[True],
- kind: SortKind = ...,
- na_position: NaPosition = ...,
- sort_remaining: bool = ...,
- ignore_index: bool = ...,
- key: IndexKeyFunc = ...,
+ self,
+ *,
+ axis: Axis = ...,
+ level: IndexLabel = ...,
+ ascending: bool | Sequence[bool] = ...,
+ inplace: Literal[True],
+ kind: SortKind = ...,
+ na_position: NaPosition = ...,
+ sort_remaining: bool = ...,
+ ignore_index: bool = ...,
+ key: IndexKeyFunc = ...,
) -> None:
...
@overload
def sort_index(
- self,
- *,
- axis: Axis = ...,
- level: IndexLabel = ...,
- ascending: bool | Sequence[bool] = ...,
- inplace: Literal[False] = ...,
- kind: SortKind = ...,
- na_position: NaPosition = ...,
- sort_remaining: bool = ...,
- ignore_index: bool = ...,
- key: IndexKeyFunc = ...,
+ self,
+ *,
+ axis: Axis = ...,
+ level: IndexLabel = ...,
+ ascending: bool | Sequence[bool] = ...,
+ inplace: Literal[False] = ...,
+ kind: SortKind = ...,
+ na_position: NaPosition = ...,
+ sort_remaining: bool = ...,
+ ignore_index: bool = ...,
+ key: IndexKeyFunc = ...,
) -> Series:
...
@overload
def sort_index(
- self,
- *,
- axis: Axis = ...,
- level: IndexLabel = ...,
- ascending: bool | Sequence[bool] = ...,
- inplace: bool = ...,
- kind: SortKind = ...,
- na_position: NaPosition = ...,
- sort_remaining: bool = ...,
- ignore_index: bool = ...,
- key: IndexKeyFunc = ...,
+ self,
+ *,
+ axis: Axis = ...,
+ level: IndexLabel = ...,
+ ascending: bool | Sequence[bool] = ...,
+ inplace: bool = ...,
+ kind: SortKind = ...,
+ na_position: NaPosition = ...,
+ sort_remaining: bool = ...,
+ ignore_index: bool = ...,
+ key: IndexKeyFunc = ...,
) -> Series | None:
...
def sort_index(
- self,
- *,
- axis: Axis = 0,
- level: IndexLabel | None = None,
- ascending: bool | Sequence[bool] = True,
- inplace: bool = False,
- kind: SortKind = "quicksort",
- na_position: NaPosition = "last",
- sort_remaining: bool = True,
- ignore_index: bool = False,
- key: IndexKeyFunc | None = None,
+ self,
+ *,
+ axis: Axis = 0,
+ level: IndexLabel | None = None,
+ ascending: bool | Sequence[bool] = True,
+ inplace: bool = False,
+ kind: SortKind = "quicksort",
+ na_position: NaPosition = "last",
+ sort_remaining: bool = True,
+ ignore_index: bool = False,
+ key: IndexKeyFunc | None = None,
) -> Series | None:
"""
Sort Series by index labels.
@@ -3937,10 +4073,10 @@ def sort_index(
)
def argsort(
- self,
- axis: Axis = 0,
- kind: SortKind = "quicksort",
- order: None = None,
+ self,
+ axis: Axis = 0,
+ kind: SortKind = "quicksort",
+ order: None = None,
) -> Series:
"""
Return the integer indices that would sort the Series values.
@@ -4004,7 +4140,7 @@ def argsort(
return res.__finalize__(self, method="argsort")
def nlargest(
- self, n: int = 5, keep: Literal["first", "last", "all"] = "first"
+ self, n: int = 5, keep: Literal["first", "last", "all"] = "first"
) -> Series:
"""
Return the largest `n` elements.
@@ -4104,7 +4240,7 @@ def nlargest(
return selectn.SelectNSeries(self, n=n, keep=keep).nlargest()
def nsmallest(
- self, n: int = 5, keep: Literal["first", "last", "all"] = "first"
+ self, n: int = 5, keep: Literal["first", "last", "all"] = "first"
) -> Series:
"""
Return the smallest `n` elements.
@@ -4263,7 +4399,7 @@ def nsmallest(
),
)
def swaplevel(
- self, i: Level = -2, j: Level = -1, copy: bool | None = None
+ self, i: Level = -2, j: Level = -1, copy: bool | None = None
) -> Series:
"""
Swap levels i and j in a :class:`MultiIndex`.
@@ -4402,10 +4538,10 @@ def explode(self, ignore_index: bool = False) -> Series:
return self._constructor(values, index=index, name=self.name, copy=False)
def unstack(
- self,
- level: IndexLabel = -1,
- fill_value: Hashable | None = None,
- sort: bool = True,
+ self,
+ level: IndexLabel = -1,
+ fill_value: Hashable | None = None,
+ sort: bool = True,
) -> DataFrame:
"""
Unstack, also known as pivot, Series with MultiIndex to produce DataFrame.
@@ -4458,9 +4594,9 @@ def unstack(
# function application
def map(
- self,
- arg: Callable | Mapping | Series,
- na_action: Literal["ignore"] | None = None,
+ self,
+ arg: Callable | Mapping | Series,
+ na_action: Literal["ignore"] | None = None,
) -> Series:
"""
Map values of Series according to an input mapping or function.
@@ -4614,7 +4750,7 @@ def aggregate(self, func=None, axis: Axis = 0, *args, **kwargs):
axis=_shared_doc_kwargs["axis"],
)
def transform(
- self, func: AggFuncType, axis: Axis = 0, *args, **kwargs
+ self, func: AggFuncType, axis: Axis = 0, *args, **kwargs
) -> DataFrame | Series:
# Validate axis argument
self._get_axis_number(axis)
@@ -4623,13 +4759,13 @@ def transform(
return result
def apply(
- self,
- func: AggFuncType,
- convert_dtype: bool | lib.NoDefault = lib.no_default,
- args: tuple[Any, ...] = (),
- *,
- by_row: Literal[False, "compat"] = "compat",
- **kwargs,
+ self,
+ func: AggFuncType,
+ convert_dtype: bool | lib.NoDefault = lib.no_default,
+ args: tuple[Any, ...] = (),
+ *,
+ by_row: Literal[False, "compat"] = "compat",
+ **kwargs,
) -> DataFrame | Series:
"""
Invoke function on values of Series.
@@ -4765,15 +4901,15 @@ def apply(
).apply()
def _reindex_indexer(
- self,
- new_index: Index | None,
- indexer: npt.NDArray[np.intp] | None,
- copy: bool | None,
+ self,
+ new_index: Index | None,
+ indexer: npt.NDArray[np.intp] | None,
+ copy: bool | None,
) -> Series:
# Note: new_index is None iff indexer is None
# if not None, indexer is np.intp
if indexer is None and (
- new_index is None or new_index.names == self.index.names
+ new_index is None or new_index.names == self.index.names
):
if using_copy_on_write():
return self.copy(deep=copy)
@@ -4795,52 +4931,52 @@ def _needs_reindex_multi(self, axes, method, level) -> bool:
@overload
def rename(
- self,
- index: Renamer | Hashable | None = ...,
- *,
- axis: Axis | None = ...,
- copy: bool = ...,
- inplace: Literal[True],
- level: Level | None = ...,
- errors: IgnoreRaise = ...,
+ self,
+ index: Renamer | Hashable | None = ...,
+ *,
+ axis: Axis | None = ...,
+ copy: bool = ...,
+ inplace: Literal[True],
+ level: Level | None = ...,
+ errors: IgnoreRaise = ...,
) -> None:
...
@overload
def rename(
- self,
- index: Renamer | Hashable | None = ...,
- *,
- axis: Axis | None = ...,
- copy: bool = ...,
- inplace: Literal[False] = ...,
- level: Level | None = ...,
- errors: IgnoreRaise = ...,
+ self,
+ index: Renamer | Hashable | None = ...,
+ *,
+ axis: Axis | None = ...,
+ copy: bool = ...,
+ inplace: Literal[False] = ...,
+ level: Level | None = ...,
+ errors: IgnoreRaise = ...,
) -> Series:
...
@overload
def rename(
- self,
- index: Renamer | Hashable | None = ...,
- *,
- axis: Axis | None = ...,
- copy: bool = ...,
- inplace: bool = ...,
- level: Level | None = ...,
- errors: IgnoreRaise = ...,
+ self,
+ index: Renamer | Hashable | None = ...,
+ *,
+ axis: Axis | None = ...,
+ copy: bool = ...,
+ inplace: bool = ...,
+ level: Level | None = ...,
+ errors: IgnoreRaise = ...,
) -> Series | None:
...
def rename(
- self,
- index: Renamer | Hashable | None = None,
- *,
- axis: Axis | None = None,
- copy: bool | None = None,
- inplace: bool = False,
- level: Level | None = None,
- errors: IgnoreRaise = "ignore",
+ self,
+ index: Renamer | Hashable | None = None,
+ *,
+ axis: Axis | None = None,
+ copy: bool | None = None,
+ inplace: bool = False,
+ level: Level | None = None,
+ errors: IgnoreRaise = "ignore",
) -> Series | None:
"""
Alter Series index labels or name.
@@ -4953,11 +5089,11 @@ def rename(
)
@Appender(NDFrame.set_axis.__doc__)
def set_axis(
- self,
- labels,
- *,
- axis: Axis = 0,
- copy: bool | None = None,
+ self,
+ labels,
+ *,
+ axis: Axis = 0,
+ copy: bool | None = None,
) -> Series:
return super().set_axis(labels, axis=axis, copy=copy)
@@ -4968,16 +5104,16 @@ def set_axis(
optional_reindex=_shared_doc_kwargs["optional_reindex"],
)
def reindex( # type: ignore[override]
- self,
- index=None,
- *,
- axis: Axis | None = None,
- method: ReindexMethod | None = None,
- copy: bool | None = None,
- level: Level | None = None,
- fill_value: Scalar | None = None,
- limit: int | None = None,
- tolerance=None,
+ self,
+ index=None,
+ *,
+ axis: Axis | None = None,
+ method: ReindexMethod | None = None,
+ copy: bool | None = None,
+ level: Level | None = None,
+ fill_value: Scalar | None = None,
+ limit: int | None = None,
+ tolerance=None,
) -> Series:
return super().reindex(
index=index,
@@ -4991,13 +5127,13 @@ def reindex( # type: ignore[override]
@doc(NDFrame.rename_axis)
def rename_axis( # type: ignore[override]
- self,
- mapper: IndexLabel | lib.NoDefault = lib.no_default,
- *,
- index=lib.no_default,
- axis: Axis = 0,
- copy: bool = True,
- inplace: bool = False,
+ self,
+ mapper: IndexLabel | lib.NoDefault = lib.no_default,
+ *,
+ index=lib.no_default,
+ axis: Axis = 0,
+ copy: bool = True,
+ inplace: bool = False,
) -> Self | None:
return super().rename_axis(
mapper=mapper,
@@ -5009,56 +5145,56 @@ def rename_axis( # type: ignore[override]
@overload
def drop(
- self,
- labels: IndexLabel = ...,
- *,
- axis: Axis = ...,
- index: IndexLabel = ...,
- columns: IndexLabel = ...,
- level: Level | None = ...,
- inplace: Literal[True],
- errors: IgnoreRaise = ...,
+ self,
+ labels: IndexLabel = ...,
+ *,
+ axis: Axis = ...,
+ index: IndexLabel = ...,
+ columns: IndexLabel = ...,
+ level: Level | None = ...,
+ inplace: Literal[True],
+ errors: IgnoreRaise = ...,
) -> None:
...
@overload
def drop(
- self,
- labels: IndexLabel = ...,
- *,
- axis: Axis = ...,
- index: IndexLabel = ...,
- columns: IndexLabel = ...,
- level: Level | None = ...,
- inplace: Literal[False] = ...,
- errors: IgnoreRaise = ...,
+ self,
+ labels: IndexLabel = ...,
+ *,
+ axis: Axis = ...,
+ index: IndexLabel = ...,
+ columns: IndexLabel = ...,
+ level: Level | None = ...,
+ inplace: Literal[False] = ...,
+ errors: IgnoreRaise = ...,
) -> Series:
...
@overload
def drop(
- self,
- labels: IndexLabel = ...,
- *,
- axis: Axis = ...,
- index: IndexLabel = ...,
- columns: IndexLabel = ...,
- level: Level | None = ...,
- inplace: bool = ...,
- errors: IgnoreRaise = ...,
+ self,
+ labels: IndexLabel = ...,
+ *,
+ axis: Axis = ...,
+ index: IndexLabel = ...,
+ columns: IndexLabel = ...,
+ level: Level | None = ...,
+ inplace: bool = ...,
+ errors: IgnoreRaise = ...,
) -> Series | None:
...
def drop(
- self,
- labels: IndexLabel | None = None,
- *,
- axis: Axis = 0,
- index: IndexLabel | None = None,
- columns: IndexLabel | None = None,
- level: Level | None = None,
- inplace: bool = False,
- errors: IgnoreRaise = "raise",
+ self,
+ labels: IndexLabel | None = None,
+ *,
+ axis: Axis = 0,
+ index: IndexLabel | None = None,
+ columns: IndexLabel | None = None,
+ level: Level | None = None,
+ inplace: bool = False,
+ errors: IgnoreRaise = "raise",
) -> Series | None:
"""
Return Series with specified index labels removed.
@@ -5185,12 +5321,12 @@ def pop(self, item: Hashable) -> Any:
@doc(INFO_DOCSTRING, **series_sub_kwargs)
def info(
- self,
- verbose: bool | None = None,
- buf: IO[str] | None = None,
- max_cols: int | None = None,
- memory_usage: bool | str | None = None,
- show_counts: bool = True,
+ self,
+ verbose: bool | None = None,
+ buf: IO[str] | None = None,
+ max_cols: int | None = None,
+ memory_usage: bool | str | None = None,
+ show_counts: bool = True,
) -> None:
return SeriesInfo(self, memory_usage).render(
buf=buf,
@@ -5354,10 +5490,10 @@ def isin(self, values) -> Series:
)
def between(
- self,
- left,
- right,
- inclusive: Literal["both", "neither", "left", "right"] = "both",
+ self,
+ left,
+ right,
+ inclusive: Literal["both", "neither", "left", "right"] = "both",
) -> Series:
"""
Return boolean Series equivalent to left <= series <= right.
@@ -5450,13 +5586,13 @@ def between(
# Convert to types that support pd.NA
def _convert_dtypes(
- self,
- infer_objects: bool = True,
- convert_string: bool = True,
- convert_integer: bool = True,
- convert_boolean: bool = True,
- convert_floating: bool = True,
- dtype_backend: DtypeBackend = "numpy_nullable",
+ self,
+ infer_objects: bool = True,
+ convert_string: bool = True,
+ convert_integer: bool = True,
+ convert_boolean: bool = True,
+ convert_floating: bool = True,
+ dtype_backend: DtypeBackend = "numpy_nullable",
) -> Series:
input_series = self
if infer_objects:
@@ -5507,33 +5643,33 @@ def notnull(self) -> Series:
@overload
def dropna(
- self,
- *,
- axis: Axis = ...,
- inplace: Literal[False] = ...,
- how: AnyAll | None = ...,
- ignore_index: bool = ...,
+ self,
+ *,
+ axis: Axis = ...,
+ inplace: Literal[False] = ...,
+ how: AnyAll | None = ...,
+ ignore_index: bool = ...,
) -> Series:
...
@overload
def dropna(
- self,
- *,
- axis: Axis = ...,
- inplace: Literal[True],
- how: AnyAll | None = ...,
- ignore_index: bool = ...,
+ self,
+ *,
+ axis: Axis = ...,
+ inplace: Literal[True],
+ how: AnyAll | None = ...,
+ ignore_index: bool = ...,
) -> None:
...
def dropna(
- self,
- *,
- axis: Axis = 0,
- inplace: bool = False,
- how: AnyAll | None = None,
- ignore_index: bool = False,
+ self,
+ *,
+ axis: Axis = 0,
+ inplace: bool = False,
+ how: AnyAll | None = None,
+ ignore_index: bool = False,
) -> Series | None:
"""
Return a new Series with missing values removed.
@@ -5626,10 +5762,10 @@ def dropna(
# Time series-oriented methods
def to_timestamp(
- self,
- freq=None,
- how: Literal["s", "e", "start", "end"] = "start",
- copy: bool | None = None,
+ self,
+ freq=None,
+ how: Literal["s", "e", "start", "end"] = "start",
+ copy: bool | None = None,
) -> Series:
"""
Cast to DatetimeIndex of Timestamps, at *beginning* of period.
@@ -5831,8 +5967,8 @@ def _align_for_op(self, right, align_asobject: bool = False):
if not left.index.equals(right.index):
if align_asobject:
if left.dtype not in (object, np.bool_) or right.dtype not in (
- object,
- np.bool_,
+ object,
+ np.bool_,
):
warnings.warn(
"Operation between non boolean Series with different "
@@ -5884,7 +6020,7 @@ def _binop(self, other: Series, func, level=None, fill_value=None) -> Series:
return cast(Series, out)
def _construct_result(
- self, result: ArrayLike | tuple[ArrayLike, ArrayLike], name: Hashable
+ self, result: ArrayLike | tuple[ArrayLike, ArrayLike], name: Hashable
) -> Series | tuple[Series, Series]:
"""
Construct an appropriately-labelled Series from the result of an op.
@@ -6006,11 +6142,11 @@ def rsub(self, other, level=None, fill_value=None, axis: Axis = 0):
@Appender(ops.make_flex_doc("mul", "series"))
def mul(
- self,
- other,
- level: Level | None = None,
- fill_value: float | None = None,
- axis: Axis = 0,
+ self,
+ other,
+ level: Level | None = None,
+ fill_value: float | None = None,
+ axis: Axis = 0,
):
return self._flex_method(
other, operator.mul, level=level, fill_value=fill_value, axis=axis
@@ -6093,16 +6229,16 @@ def rdivmod(self, other, level=None, fill_value=None, axis: Axis = 0):
# Reductions
def _reduce(
- self,
- op,
- # error: Variable "pandas.core.series.Series.str" is not valid as a type
- name: str, # type: ignore[valid-type]
- *,
- axis: Axis = 0,
- skipna: bool = True,
- numeric_only: bool = False,
- filter_type=None,
- **kwds,
+ self,
+ op,
+ # error: Variable "pandas.core.series.Series.str" is not valid as a type
+ name: str, # type: ignore[valid-type]
+ *,
+ axis: Axis = 0,
+ skipna: bool = True,
+ numeric_only: bool = False,
+ filter_type=None,
+ **kwds,
):
"""
Perform a reduction operation.
@@ -6136,12 +6272,12 @@ def _reduce(
@Appender(make_doc("any", ndim=1))
# error: Signature of "any" incompatible with supertype "NDFrame"
def any( # type: ignore[override]
- self,
- *,
- axis: Axis = 0,
- bool_only: bool = False,
- skipna: bool = True,
- **kwargs,
+ self,
+ *,
+ axis: Axis = 0,
+ bool_only: bool = False,
+ skipna: bool = True,
+ **kwargs,
) -> bool:
nv.validate_logical_func((), kwargs, fname="any")
validate_bool_kwarg(skipna, "skipna", none_allowed=False)
@@ -6156,11 +6292,11 @@ def any( # type: ignore[override]
@Appender(make_doc("all", ndim=1))
def all(
- self,
- axis: Axis = 0,
- bool_only: bool = False,
- skipna: bool = True,
- **kwargs,
+ self,
+ axis: Axis = 0,
+ bool_only: bool = False,
+ skipna: bool = True,
+ **kwargs,
) -> bool:
nv.validate_logical_func((), kwargs, fname="all")
validate_bool_kwarg(skipna, "skipna", none_allowed=False)
@@ -6175,116 +6311,116 @@ def all(
@doc(make_doc("min", ndim=1))
def min(
- self,
- axis: Axis | None = 0,
- skipna: bool = True,
- numeric_only: bool = False,
- **kwargs,
+ self,
+ axis: Axis | None = 0,
+ skipna: bool = True,
+ numeric_only: bool = False,
+ **kwargs,
):
return NDFrame.min(self, axis, skipna, numeric_only, **kwargs)
@doc(make_doc("max", ndim=1))
def max(
- self,
- axis: Axis | None = 0,
- skipna: bool = True,
- numeric_only: bool = False,
- **kwargs,
+ self,
+ axis: Axis | None = 0,
+ skipna: bool = True,
+ numeric_only: bool = False,
+ **kwargs,
):
return NDFrame.max(self, axis, skipna, numeric_only, **kwargs)
@doc(make_doc("sum", ndim=1))
def sum(
- self,
- axis: Axis | None = None,
- skipna: bool = True,
- numeric_only: bool = False,
- min_count: int = 0,
- **kwargs,
+ self,
+ axis: Axis | None = None,
+ skipna: bool = True,
+ numeric_only: bool = False,
+ min_count: int = 0,
+ **kwargs,
):
return NDFrame.sum(self, axis, skipna, numeric_only, min_count, **kwargs)
@doc(make_doc("prod", ndim=1))
def prod(
- self,
- axis: Axis | None = None,
- skipna: bool = True,
- numeric_only: bool = False,
- min_count: int = 0,
- **kwargs,
+ self,
+ axis: Axis | None = None,
+ skipna: bool = True,
+ numeric_only: bool = False,
+ min_count: int = 0,
+ **kwargs,
):
return NDFrame.prod(self, axis, skipna, numeric_only, min_count, **kwargs)
@doc(make_doc("mean", ndim=1))
def mean(
- self,
- axis: Axis | None = 0,
- skipna: bool = True,
- numeric_only: bool = False,
- **kwargs,
+ self,
+ axis: Axis | None = 0,
+ skipna: bool = True,
+ numeric_only: bool = False,
+ **kwargs,
):
return NDFrame.mean(self, axis, skipna, numeric_only, **kwargs)
@doc(make_doc("median", ndim=1))
def median(
- self,
- axis: Axis | None = 0,
- skipna: bool = True,
- numeric_only: bool = False,
- **kwargs,
+ self,
+ axis: Axis | None = 0,
+ skipna: bool = True,
+ numeric_only: bool = False,
+ **kwargs,
):
return NDFrame.median(self, axis, skipna, numeric_only, **kwargs)
@doc(make_doc("sem", ndim=1))
def sem(
- self,
- axis: Axis | None = None,
- skipna: bool = True,
- ddof: int = 1,
- numeric_only: bool = False,
- **kwargs,
+ self,
+ axis: Axis | None = None,
+ skipna: bool = True,
+ ddof: int = 1,
+ numeric_only: bool = False,
+ **kwargs,
):
return NDFrame.sem(self, axis, skipna, ddof, numeric_only, **kwargs)
@doc(make_doc("var", ndim=1))
def var(
- self,
- axis: Axis | None = None,
- skipna: bool = True,
- ddof: int = 1,
- numeric_only: bool = False,
- **kwargs,
+ self,
+ axis: Axis | None = None,
+ skipna: bool = True,
+ ddof: int = 1,
+ numeric_only: bool = False,
+ **kwargs,
):
return NDFrame.var(self, axis, skipna, ddof, numeric_only, **kwargs)
@doc(make_doc("std", ndim=1))
def std(
- self,
- axis: Axis | None = None,
- skipna: bool = True,
- ddof: int = 1,
- numeric_only: bool = False,
- **kwargs,
+ self,
+ axis: Axis | None = None,
+ skipna: bool = True,
+ ddof: int = 1,
+ numeric_only: bool = False,
+ **kwargs,
):
return NDFrame.std(self, axis, skipna, ddof, numeric_only, **kwargs)
@doc(make_doc("skew", ndim=1))
def skew(
- self,
- axis: Axis | None = 0,
- skipna: bool = True,
- numeric_only: bool = False,
- **kwargs,
+ self,
+ axis: Axis | None = 0,
+ skipna: bool = True,
+ numeric_only: bool = False,
+ **kwargs,
):
return NDFrame.skew(self, axis, skipna, numeric_only, **kwargs)
@doc(make_doc("kurt", ndim=1))
def kurt(
- self,
- axis: Axis | None = 0,
- skipna: bool = True,
- numeric_only: bool = False,
- **kwargs,
+ self,
+ axis: Axis | None = 0,
+ skipna: bool = True,
+ numeric_only: bool = False,
+ **kwargs,
):
return NDFrame.kurt(self, axis, skipna, numeric_only, **kwargs)
diff --git a/pandas/io/parquet.py b/pandas/io/parquet.py
index f51b98a929440..67388bc465192 100644
--- a/pandas/io/parquet.py
+++ b/pandas/io/parquet.py
@@ -8,6 +8,7 @@
TYPE_CHECKING,
Any,
Literal,
+ overload,
)
import warnings
from warnings import catch_warnings
@@ -25,6 +26,7 @@
from pandas import (
DataFrame,
get_option,
+ Series,
)
from pandas.core.shared_docs import _shared_docs
@@ -82,11 +84,11 @@ def get_engine(engine: str) -> BaseImpl:
def _get_path_or_handle(
- path: FilePath | ReadBuffer[bytes] | WriteBuffer[bytes],
- fs: Any,
- storage_options: StorageOptions | None = None,
- mode: str = "rb",
- is_dir: bool = False,
+ path: FilePath | ReadBuffer[bytes] | WriteBuffer[bytes],
+ fs: Any,
+ storage_options: StorageOptions | None = None,
+ mode: str = "rb",
+ is_dir: bool = False,
) -> tuple[
FilePath | ReadBuffer[bytes] | WriteBuffer[bytes], IOHandles[bytes] | None, Any
]:
@@ -128,10 +130,10 @@ def _get_path_or_handle(
handles = None
if (
- not fs
- and not is_dir
- and isinstance(path_or_handle, str)
- and not os.path.isdir(path_or_handle)
+ not fs
+ and not is_dir
+ and isinstance(path_or_handle, str)
+ and not os.path.isdir(path_or_handle)
):
# use get_handle only when we are very certain that it is not a directory
# fsspec resources can also point to directories
@@ -146,9 +148,9 @@ def _get_path_or_handle(
class BaseImpl:
@staticmethod
- def validate_dataframe(df: DataFrame) -> None:
- if not isinstance(df, DataFrame):
- raise ValueError("to_parquet only supports IO with DataFrames")
+ def validate_data(data: DataFrame | Series) -> None:
+ if not isinstance(data, DataFrame) and not isinstance(data, Series):
+ raise ValueError("to_parquet only supports IO with DataFrames and Series")
def write(self, df: DataFrame, path, compression, **kwargs):
raise AbstractMethodError(self)
@@ -169,29 +171,32 @@ def __init__(self) -> None:
self.api = pyarrow
+
def write(
- self,
- df: DataFrame,
- path: FilePath | WriteBuffer[bytes],
- compression: str | None = "snappy",
- index: bool | None = None,
- storage_options: StorageOptions | None = None,
- partition_cols: list[str] | None = None,
- filesystem=None,
- **kwargs,
+ self,
+ data: DataFrame | Series,
+ path: FilePath | WriteBuffer[bytes],
+ compression: str | None = "snappy",
+ index: bool | None = None,
+ storage_options: StorageOptions | None = None,
+ partition_cols: list[str] | None = None,
+ filesystem=None,
+ **kwargs,
) -> None:
- self.validate_dataframe(df)
+ self.validate_data(data)
from_pandas_kwargs: dict[str, Any] = {"schema": kwargs.pop("schema", None)}
if index is not None:
from_pandas_kwargs["preserve_index"] = index
+ if isinstance(data, Series):
+ table = self.api.Table.from_pandas(data.to_frame(), **from_pandas_kwargs)
+ else:
+ table = self.api.Table.from_pandas(data, **from_pandas_kwargs)
- table = self.api.Table.from_pandas(df, **from_pandas_kwargs)
-
- if df.attrs:
- df_metadata = {"PANDAS_ATTRS": json.dumps(df.attrs)}
+ if data.attrs:
+ data_metadata = {"PANDAS_ATTRS": json.dumps(data.attrs)}
existing_metadata = table.schema.metadata
- merged_metadata = {**existing_metadata, **df_metadata}
+ merged_metadata = {**existing_metadata, **data_metadata}
table = table.replace_schema_metadata(merged_metadata)
path_or_handle, handles, filesystem = _get_path_or_handle(
@@ -202,9 +207,9 @@ def write(
is_dir=partition_cols is not None,
)
if (
- isinstance(path_or_handle, io.BufferedWriter)
- and hasattr(path_or_handle, "name")
- and isinstance(path_or_handle.name, (str, bytes))
+ isinstance(path_or_handle, io.BufferedWriter)
+ and hasattr(path_or_handle, "name")
+ and isinstance(path_or_handle.name, (str, bytes))
):
path_or_handle = path_or_handle.name
if isinstance(path_or_handle, bytes):
@@ -235,15 +240,15 @@ def write(
handles.close()
def read(
- self,
- path,
- columns=None,
- filters=None,
- use_nullable_dtypes: bool = False,
- dtype_backend: DtypeBackend | lib.NoDefault = lib.no_default,
- storage_options: StorageOptions | None = None,
- filesystem=None,
- **kwargs,
+ self,
+ path,
+ columns=None,
+ filters=None,
+ use_nullable_dtypes: bool = False,
+ dtype_backend: DtypeBackend | lib.NoDefault = lib.no_default,
+ storage_options: StorageOptions | None = None,
+ filesystem=None,
+ **kwargs,
) -> DataFrame:
kwargs["use_pandas_metadata"] = True
@@ -301,17 +306,17 @@ def __init__(self) -> None:
self.api = fastparquet
def write(
- self,
- df: DataFrame,
- path,
- compression: Literal["snappy", "gzip", "brotli"] | None = "snappy",
- index=None,
- partition_cols=None,
- storage_options: StorageOptions | None = None,
- filesystem=None,
- **kwargs,
+ self,
+ data: DataFrame | Series,
+ path,
+ compression: Literal["snappy", "gzip", "brotli"] | None = "snappy",
+ index=None,
+ partition_cols=None,
+ storage_options: StorageOptions | None = None,
+ filesystem=None,
+ **kwargs,
) -> None:
- self.validate_dataframe(df)
+ self.validate_data(data)
if "partition_on" in kwargs and partition_cols is not None:
raise ValueError(
@@ -346,7 +351,7 @@ def write(
with catch_warnings(record=True):
self.api.write(
path,
- df,
+ data,
compression=compression,
write_index=index,
partition_on=partition_cols,
@@ -354,13 +359,13 @@ def write(
)
def read(
- self,
- path,
- columns=None,
- filters=None,
- storage_options: StorageOptions | None = None,
- filesystem=None,
- **kwargs,
+ self,
+ path,
+ columns=None,
+ filters=None,
+ storage_options: StorageOptions | None = None,
+ filesystem=None,
+ **kwargs,
) -> DataFrame:
parquet_kwargs: dict[str, Any] = {}
use_nullable_dtypes = kwargs.pop("use_nullable_dtypes", False)
@@ -404,24 +409,54 @@ def read(
handles.close()
+@overload
+def to_parquet(
+ data: DataFrame,
+ path: FilePath | WriteBuffer[bytes] | None = None,
+ engine: str = "auto",
+ compression: str | None = "snappy",
+ index: bool | None = None,
+ storage_options: StorageOptions | None = None,
+ partition_cols: list[str] | None = None,
+ filesystem: Any = None,
+ **kwargs,
+) -> bytes | None:
+ ...
+
+
+@overload
+def to_parquet(
+ data: Series,
+ path: FilePath | WriteBuffer[bytes] | None = None,
+ engine: str = "auto",
+ compression: str | None = "snappy",
+ index: bool | None = None,
+ storage_options: StorageOptions | None = None,
+ partition_cols: list[str] | None = None,
+ filesystem: Any = None,
+ **kwargs,
+) -> bytes | None:
+ ...
+
+
@doc(storage_options=_shared_docs["storage_options"])
def to_parquet(
- df: DataFrame,
- path: FilePath | WriteBuffer[bytes] | None = None,
- engine: str = "auto",
- compression: str | None = "snappy",
- index: bool | None = None,
- storage_options: StorageOptions | None = None,
- partition_cols: list[str] | None = None,
- filesystem: Any = None,
- **kwargs,
+ data: DataFrame | Series,
+ path: FilePath | WriteBuffer[bytes] | None = None,
+ engine: str = "auto",
+ compression: str | None = "snappy",
+ index: bool | None = None,
+ storage_options: StorageOptions | None = None,
+ partition_cols: list[str] | None = None,
+ filesystem: Any = None,
+ **kwargs,
) -> bytes | None:
"""
- Write a DataFrame to the parquet format.
+ Write a DataFrame or a Series to the parquet format.
Parameters
----------
- df : DataFrame
+ data : DataFrame or Series
path : str, path object, file-like object, or None, default None
String, path object (implementing ``os.PathLike[str]``), or file-like
object implementing a binary ``write()`` function. If None, the result is
@@ -481,7 +516,7 @@ def to_parquet(
path_or_buf: FilePath | WriteBuffer[bytes] = io.BytesIO() if path is None else path
impl.write(
- df,
+ data,
path_or_buf,
compression=compression,
index=index,
@@ -500,15 +535,15 @@ def to_parquet(
@doc(storage_options=_shared_docs["storage_options"])
def read_parquet(
- path: FilePath | ReadBuffer[bytes],
- engine: str = "auto",
- columns: list[str] | None = None,
- storage_options: StorageOptions | None = None,
- use_nullable_dtypes: bool | lib.NoDefault = lib.no_default,
- dtype_backend: DtypeBackend | lib.NoDefault = lib.no_default,
- filesystem: Any = None,
- filters: list[tuple] | list[list[tuple]] | None = None,
- **kwargs,
+ path: FilePath | ReadBuffer[bytes],
+ engine: str = "auto",
+ columns: list[str] | None = None,
+ storage_options: StorageOptions | None = None,
+ use_nullable_dtypes: bool | lib.NoDefault = lib.no_default,
+ dtype_backend: DtypeBackend | lib.NoDefault = lib.no_default,
+ filesystem: Any = None,
+ filters: list[tuple] | list[list[tuple]] | None = None,
+ **kwargs,
) -> DataFrame:
"""
Load a parquet object from the file path, returning a DataFrame.
| … of the parquet engines to take either series or dfs
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54674 | 2023-08-21T21:17:47Z | 2023-08-21T21:18:45Z | null | 2023-08-21T21:39:50Z |
CLN: assorted | diff --git a/pandas/core/array_algos/take.py b/pandas/core/array_algos/take.py
index 8ea70e2694d92..ac674e31586e7 100644
--- a/pandas/core/array_algos/take.py
+++ b/pandas/core/array_algos/take.py
@@ -66,8 +66,7 @@ def take_nd(
"""
Specialized Cython take which sets NaN values in one pass
- This dispatches to ``take`` defined on ExtensionArrays. It does not
- currently dispatch to ``SparseArray.take`` for sparse ``arr``.
+ This dispatches to ``take`` defined on ExtensionArrays.
Note: this function assumes that the indexer is a valid(ated) indexer with
no out of bound indices.
diff --git a/pandas/core/arrays/base.py b/pandas/core/arrays/base.py
index 3a6db34b0e8b5..3827b5b5d40b2 100644
--- a/pandas/core/arrays/base.py
+++ b/pandas/core/arrays/base.py
@@ -2042,6 +2042,7 @@ def _where(self, mask: npt.NDArray[np.bool_], value) -> Self:
result[~mask] = val
return result
+ # TODO(3.0): this can be removed once GH#33302 deprecation is enforced
def _fill_mask_inplace(
self, method: str, limit: int | None, mask: npt.NDArray[np.bool_]
) -> None:
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index aefc94ebd665c..dae0fb7782791 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -2898,17 +2898,15 @@ def _delegate_method(self, name: str, *args, **kwargs):
# utility routines
-def _get_codes_for_values(values, categories: Index) -> np.ndarray:
+def _get_codes_for_values(
+ values: Index | Series | ExtensionArray | np.ndarray,
+ categories: Index,
+) -> np.ndarray:
"""
utility routine to turn values into codes given the specified categories
If `values` is known to be a Categorical, use recode_for_categories instead.
"""
- if values.ndim > 1:
- flat = values.ravel()
- codes = _get_codes_for_values(flat, categories)
- return codes.reshape(values.shape)
-
codes = categories.get_indexer_for(values)
return coerce_indexer_dtype(codes, categories)
diff --git a/pandas/core/config_init.py b/pandas/core/config_init.py
index 27e9bf8958ab0..0e857626b5697 100644
--- a/pandas/core/config_init.py
+++ b/pandas/core/config_init.py
@@ -420,6 +420,7 @@ def is_terminal() -> bool:
def use_inf_as_na_cb(key) -> None:
+ # TODO(3.0): enforcing this deprecation will close GH#52501
from pandas.core.dtypes.missing import _use_inf_as_na
_use_inf_as_na(key)
diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index 657cbce40087a..aa228191adc62 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -1707,8 +1707,6 @@ def can_hold_element(arr: ArrayLike, element: Any) -> bool:
arr._validate_setitem_value(element)
return True
except (ValueError, TypeError):
- # TODO: re-use _catch_deprecated_value_error to ensure we are
- # strict about what exceptions we allow through here.
return False
# This is technically incorrect, but maintains the behavior of
diff --git a/pandas/core/dtypes/dtypes.py b/pandas/core/dtypes/dtypes.py
index a9618963e0a51..59939057d4b37 100644
--- a/pandas/core/dtypes/dtypes.py
+++ b/pandas/core/dtypes/dtypes.py
@@ -985,6 +985,7 @@ def __new__(cls, freq):
if isinstance(freq, BDay):
# GH#53446
+ # TODO(3.0): enforcing this will close GH#10575
warnings.warn(
"PeriodDtype[B] is deprecated and will be removed in a future "
"version. Use a DatetimeIndex with freq='B' instead",
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 7da41b890598d..32069575c807b 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -255,8 +255,6 @@ class NDFrame(PandasObject, indexing.IndexingMixin):
"_is_copy",
"_name",
"_metadata",
- "__array_struct__",
- "__array_interface__",
"_flags",
]
_internal_names_set: set[str] = set(_internal_names)
@@ -6970,6 +6968,9 @@ def _pad_or_backfill(
method = clean_fill_method(method)
if not self._mgr.is_single_block and axis == 1:
+ # e.g. test_align_fill_method
+ # TODO(3.0): once downcast is removed, we can do the .T
+ # in all axis=1 cases, and remove axis kward from mgr.pad_or_backfill.
if inplace:
raise NotImplementedError()
result = self.T._pad_or_backfill(method=method, limit=limit).T
diff --git a/pandas/core/groupby/grouper.py b/pandas/core/groupby/grouper.py
index ea92fbae9566d..95cb114c1472a 100644
--- a/pandas/core/groupby/grouper.py
+++ b/pandas/core/groupby/grouper.py
@@ -441,6 +441,8 @@ def indexer(self):
@final
@property
def obj(self):
+ # TODO(3.0): enforcing these deprecations on Grouper should close
+ # GH#25564, GH#41930
warnings.warn(
f"{type(self).__name__}.obj is deprecated and will be removed "
"in a future version. Use GroupBy.indexer instead.",
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 796aadf9e4061..93b99b7647fc0 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -3782,9 +3782,15 @@ def get_loc(self, key):
self._check_indexing_error(key)
raise
- _index_shared_docs[
- "get_indexer"
- ] = """
+ @final
+ def get_indexer(
+ self,
+ target,
+ method: ReindexMethod | None = None,
+ limit: int | None = None,
+ tolerance=None,
+ ) -> npt.NDArray[np.intp]:
+ """
Compute indexer and mask for new index given the current index.
The indexer should be then used as an input to ndarray.take to align the
@@ -3792,7 +3798,7 @@ def get_loc(self, key):
Parameters
----------
- target : %(target_klass)s
+ target : Index
method : {None, 'pad'/'ffill', 'backfill'/'bfill', 'nearest'}, optional
* default: exact matches only.
* pad / ffill: find the PREVIOUS index value if no exact match.
@@ -3819,7 +3825,7 @@ def get_loc(self, key):
Integers from 0 to n - 1 indicating that the index at these
positions matches the corresponding target values. Missing values
in the target are marked by -1.
- %(raises_section)s
+
Notes
-----
Returns -1 for unmatched values, for further explanation see the
@@ -3834,16 +3840,6 @@ def get_loc(self, key):
Notice that the return value is an array of locations in ``index``
and ``x`` is marked by -1, as it is not in ``index``.
"""
-
- @Appender(_index_shared_docs["get_indexer"] % _index_doc_kwargs)
- @final
- def get_indexer(
- self,
- target,
- method: ReindexMethod | None = None,
- limit: int | None = None,
- tolerance=None,
- ) -> npt.NDArray[np.intp]:
method = clean_reindex_fill_method(method)
orig_target = target
target = self._maybe_cast_listlike_indexer(target)
@@ -3898,7 +3894,7 @@ def get_indexer(
return ensure_platform_int(indexer)
- pself, ptarget = self._maybe_promote(target)
+ pself, ptarget = self._maybe_downcast_for_indexing(target)
if pself is not self or ptarget is not target:
return pself.get_indexer(
ptarget, method=method, limit=limit, tolerance=tolerance
@@ -4582,7 +4578,7 @@ def join(
if not self._is_multi and not other._is_multi:
# We have specific handling for MultiIndex below
- pself, pother = self._maybe_promote(other)
+ pself, pother = self._maybe_downcast_for_indexing(other)
if pself is not self or pother is not other:
return pself.join(
pother, how=how, level=level, return_indexers=True, sort=sort
@@ -6046,7 +6042,7 @@ def get_indexer_non_unique(
# that can be matched to Interval scalars.
return self._get_indexer_non_comparable(target, method=None, unique=False)
- pself, ptarget = self._maybe_promote(target)
+ pself, ptarget = self._maybe_downcast_for_indexing(target)
if pself is not self or ptarget is not target:
return pself.get_indexer_non_unique(ptarget)
@@ -6062,8 +6058,8 @@ def get_indexer_non_unique(
# TODO: get_indexer has fastpaths for both Categorical-self and
# Categorical-target. Can we do something similar here?
- # Note: _maybe_promote ensures we never get here with MultiIndex
- # self and non-Multi target
+ # Note: _maybe_downcast_for_indexing ensures we never get here
+ # with MultiIndex self and non-Multi target
tgt_values = target._get_engine_target()
if self._is_multi and target._is_multi:
engine = self._engine
@@ -6237,7 +6233,7 @@ def _index_as_unique(self) -> bool:
_requires_unique_msg = "Reindexing only valid with uniquely valued Index objects"
@final
- def _maybe_promote(self, other: Index) -> tuple[Index, Index]:
+ def _maybe_downcast_for_indexing(self, other: Index) -> tuple[Index, Index]:
"""
When dealing with an object-dtype Index and a non-object Index, see
if we can upcast the object-dtype one to improve performance.
@@ -6278,7 +6274,7 @@ def _maybe_promote(self, other: Index) -> tuple[Index, Index]:
if not is_object_dtype(self.dtype) and is_object_dtype(other.dtype):
# Reverse op so we dont need to re-implement on the subclasses
- other, self = other._maybe_promote(self)
+ other, self = other._maybe_downcast_for_indexing(self)
return self, other
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index eca0df67ff054..ffaeef14e42a5 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -27,7 +27,6 @@
BlockValuesRefs,
)
from pandas._libs.missing import NA
-from pandas._libs.tslibs import IncompatibleFrequency
from pandas._typing import (
ArrayLike,
AxisInt,
@@ -1731,9 +1730,7 @@ def setitem(self, indexer, value, using_cow: bool = False):
try:
values[indexer] = value
- except (ValueError, TypeError) as err:
- _catch_deprecated_value_error(err)
-
+ except (ValueError, TypeError):
if isinstance(self.dtype, IntervalDtype):
# see TestSetitemFloatIntervalWithIntIntervalValues
nb = self.coerce_to_target_dtype(orig_value, warn_on_upcast=True)
@@ -1776,9 +1773,7 @@ def where(
try:
res_values = arr._where(cond, other).T
- except (ValueError, TypeError) as err:
- _catch_deprecated_value_error(err)
-
+ except (ValueError, TypeError):
if self.ndim == 1 or self.shape[0] == 1:
if isinstance(self.dtype, IntervalDtype):
# TestSetitemFloatIntervalWithIntIntervalValues
@@ -1847,9 +1842,7 @@ def putmask(self, mask, new, using_cow: bool = False) -> list[Block]:
try:
# Caller is responsible for ensuring matching lengths
values._putmask(mask, new)
- except (TypeError, ValueError) as err:
- _catch_deprecated_value_error(err)
-
+ except (TypeError, ValueError):
if self.ndim == 1 or self.shape[0] == 1:
if isinstance(self.dtype, IntervalDtype):
# Discussion about what we want to support in the general
@@ -2256,19 +2249,6 @@ def is_view(self) -> bool:
return self.values._ndarray.base is not None
-def _catch_deprecated_value_error(err: Exception) -> None:
- """
- We catch ValueError for now, but only a specific one raised by DatetimeArray
- which will no longer be raised in version 2.0.
- """
- if isinstance(err, ValueError):
- if isinstance(err, IncompatibleFrequency):
- pass
- elif "'value.closed' is" in str(err):
- # IntervalDtype mismatched 'closed'
- pass
-
-
class DatetimeLikeBlock(NDArrayBackedExtensionBlock):
"""Block for datetime64[ns], timedelta64[ns]."""
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 564c799d7ab66..9a934217ed5c1 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -3985,6 +3985,8 @@ def argsort(
mask = isna(values)
if mask.any():
+ # TODO(3.0): once this deprecation is enforced we can call
+ # self.array.argsort directly, which will close GH#43840
warnings.warn(
"The behavior of Series.argsort in the presence of NA values is "
"deprecated. In a future version, NA values will be ordered "
@@ -5199,6 +5201,7 @@ def info(
show_counts=show_counts,
)
+ # TODO(3.0): this can be removed once GH#33302 deprecation is enforced
def _replace_single(self, to_replace, method: str, inplace: bool, limit):
"""
Replaces values in a Series using the fill method specified when no
diff --git a/pandas/tests/arrays/string_/test_string.py b/pandas/tests/arrays/string_/test_string.py
index de93e89ecacd5..1476ef87f4666 100644
--- a/pandas/tests/arrays/string_/test_string.py
+++ b/pandas/tests/arrays/string_/test_string.py
@@ -94,6 +94,14 @@ def test_astype_roundtrip(dtype):
result = casted.astype("datetime64[ns]")
tm.assert_series_equal(result, ser)
+ # GH#38509 same thing for timedelta64
+ ser2 = ser - ser.iloc[-1]
+ casted2 = ser2.astype(dtype)
+ assert is_dtype_equal(casted2.dtype, dtype)
+
+ result2 = casted2.astype(ser2.dtype)
+ tm.assert_series_equal(result2, ser2)
+
def test_add(dtype):
a = pd.Series(["a", "b", "c", None, None], dtype=dtype)
diff --git a/pandas/tests/extension/base/reduce.py b/pandas/tests/extension/base/reduce.py
index 9b56b10681e15..4edbcacffe6af 100644
--- a/pandas/tests/extension/base/reduce.py
+++ b/pandas/tests/extension/base/reduce.py
@@ -83,6 +83,7 @@ def test_reduce_series_boolean(self, data, all_boolean_reductions, skipna):
ser = pd.Series(data)
if not self._supports_reduction(ser, op_name):
+ # TODO: the message being checked here isn't actually checking anything
msg = (
"[Cc]annot perform|Categorical is not ordered for operation|"
"does not support reduction|"
@@ -101,6 +102,7 @@ def test_reduce_series_numeric(self, data, all_numeric_reductions, skipna):
ser = pd.Series(data)
if not self._supports_reduction(ser, op_name):
+ # TODO: the message being checked here isn't actually checking anything
msg = (
"[Cc]annot perform|Categorical is not ordered for operation|"
"does not support reduction|"
diff --git a/pandas/tests/extension/date/array.py b/pandas/tests/extension/date/array.py
index 39accd6d223a7..2306f5974ba18 100644
--- a/pandas/tests/extension/date/array.py
+++ b/pandas/tests/extension/date/array.py
@@ -176,9 +176,13 @@ def isna(self) -> np.ndarray:
@classmethod
def _from_sequence(cls, scalars, *, dtype: Dtype | None = None, copy=False):
if isinstance(scalars, dt.date):
- pass
+ raise TypeError
elif isinstance(scalars, DateArray):
- pass
+ if dtype is not None:
+ return scalars.astype(dtype, copy=copy)
+ if copy:
+ return scalars.copy()
+ return scalars[:]
elif isinstance(scalars, np.ndarray):
scalars = scalars.astype("U10") # 10 chars for yyyy-mm-dd
return DateArray(scalars)
diff --git a/pandas/tests/extension/test_sparse.py b/pandas/tests/extension/test_sparse.py
index 01448a2f83f75..7330e03a57daf 100644
--- a/pandas/tests/extension/test_sparse.py
+++ b/pandas/tests/extension/test_sparse.py
@@ -220,8 +220,10 @@ def test_fillna_no_op_returns_copy(self, data, request):
super().test_fillna_no_op_returns_copy(data)
@pytest.mark.xfail(reason="Unsupported")
- def test_fillna_series(self):
+ def test_fillna_series(self, data_missing):
# this one looks doable.
+ # TODO: this fails bc we do not pass through data_missing. If we did,
+ # the 0-fill case would xpass
super().test_fillna_series()
def test_fillna_frame(self, data_missing):
@@ -349,7 +351,9 @@ def test_map_raises(self, data, na_action):
class TestCasting(BaseSparseTests, base.BaseCastingTests):
@pytest.mark.xfail(raises=TypeError, reason="no sparse StringDtype")
- def test_astype_string(self, data):
+ def test_astype_string(self, data, nullable_string_dtype):
+ # TODO: this fails bc we do not pass through nullable_string_dtype;
+ # If we did, the 0-cases would xpass
super().test_astype_string(data)
diff --git a/pandas/tests/extension/test_string.py b/pandas/tests/extension/test_string.py
index 8268de9a47f11..00e5e411bd0ed 100644
--- a/pandas/tests/extension/test_string.py
+++ b/pandas/tests/extension/test_string.py
@@ -201,7 +201,7 @@ def test_groupby_extension_apply(self, data_for_grouping, groupby_apply_op):
class Test2DCompat(base.Dim2CompatTests):
@pytest.fixture(autouse=True)
- def arrow_not_supported(self, data, request):
+ def arrow_not_supported(self, data):
if isinstance(data, ArrowStringArray):
pytest.skip(reason="2D support not implemented for ArrowStringArray")
diff --git a/pandas/tests/series/methods/test_reindex.py b/pandas/tests/series/methods/test_reindex.py
index 2ab1cd13a31d8..bce7d2d554004 100644
--- a/pandas/tests/series/methods/test_reindex.py
+++ b/pandas/tests/series/methods/test_reindex.py
@@ -25,12 +25,7 @@
def test_reindex(datetime_series, string_series):
identity = string_series.reindex(string_series.index)
- # __array_interface__ is not defined for older numpies
- # and on some pythons
- try:
- assert np.may_share_memory(string_series.index, identity.index)
- except AttributeError:
- pass
+ assert np.may_share_memory(string_series.index, identity.index)
assert identity.index.is_(string_series.index)
assert identity.index.identical(string_series.index)
diff --git a/pandas/tests/series/test_arithmetic.py b/pandas/tests/series/test_arithmetic.py
index 80fd2fd7c0a06..44121cb5f784f 100644
--- a/pandas/tests/series/test_arithmetic.py
+++ b/pandas/tests/series/test_arithmetic.py
@@ -777,7 +777,7 @@ class TestNamePreservation:
@pytest.mark.parametrize("box", [list, tuple, np.array, Index, Series, pd.array])
@pytest.mark.parametrize("flex", [True, False])
def test_series_ops_name_retention(self, flex, box, names, all_binary_operators):
- # GH#33930 consistent name renteiton
+ # GH#33930 consistent name-retention
op = all_binary_operators
left = Series(range(10), name=names[0])
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54673 | 2023-08-21T21:04:15Z | 2023-08-22T18:43:01Z | 2023-08-22T18:43:01Z | 2023-08-22T19:19:29Z |
Backport PR #54496 on branch 2.1.x (Fix inference for fixed-width numpy strings with arrow string option) | diff --git a/pandas/core/construction.py b/pandas/core/construction.py
index 5757c69bb6ec7..f733ba3b445fd 100644
--- a/pandas/core/construction.py
+++ b/pandas/core/construction.py
@@ -19,6 +19,8 @@
import numpy as np
from numpy import ma
+from pandas._config import using_pyarrow_string_dtype
+
from pandas._libs import lib
from pandas._libs.tslibs import (
Period,
@@ -49,7 +51,10 @@
is_object_dtype,
pandas_dtype,
)
-from pandas.core.dtypes.dtypes import NumpyEADtype
+from pandas.core.dtypes.dtypes import (
+ ArrowDtype,
+ NumpyEADtype,
+)
from pandas.core.dtypes.generic import (
ABCDataFrame,
ABCExtensionArray,
@@ -589,6 +594,11 @@ def sanitize_array(
subarr = data
if data.dtype == object:
subarr = maybe_infer_to_datetimelike(data)
+ elif data.dtype.kind == "U" and using_pyarrow_string_dtype():
+ import pyarrow as pa
+
+ dtype = ArrowDtype(pa.string())
+ subarr = dtype.construct_array_type()._from_sequence(data, dtype=dtype)
if subarr is data and copy:
subarr = subarr.copy()
diff --git a/pandas/core/internals/construction.py b/pandas/core/internals/construction.py
index 2290cd86f35e6..8879d3318f7ca 100644
--- a/pandas/core/internals/construction.py
+++ b/pandas/core/internals/construction.py
@@ -13,6 +13,8 @@
import numpy as np
from numpy import ma
+from pandas._config import using_pyarrow_string_dtype
+
from pandas._libs import lib
from pandas.core.dtypes.astype import astype_is_view
@@ -30,7 +32,10 @@
is_named_tuple,
is_object_dtype,
)
-from pandas.core.dtypes.dtypes import ExtensionDtype
+from pandas.core.dtypes.dtypes import (
+ ArrowDtype,
+ ExtensionDtype,
+)
from pandas.core.dtypes.generic import (
ABCDataFrame,
ABCSeries,
@@ -65,6 +70,7 @@
from pandas.core.internals.blocks import (
BlockPlacement,
ensure_block_shape,
+ new_block,
new_block_2d,
)
from pandas.core.internals.managers import (
@@ -372,6 +378,20 @@ def ndarray_to_mgr(
bp = BlockPlacement(slice(len(columns)))
nb = new_block_2d(values, placement=bp, refs=refs)
block_values = [nb]
+ elif dtype is None and values.dtype.kind == "U" and using_pyarrow_string_dtype():
+ import pyarrow as pa
+
+ obj_columns = list(values)
+ dtype = ArrowDtype(pa.string())
+ block_values = [
+ new_block(
+ dtype.construct_array_type()._from_sequence(data, dtype=dtype),
+ BlockPlacement(slice(i, i + 1)),
+ ndim=1,
+ )
+ for i, data in enumerate(obj_columns)
+ ]
+
else:
bp = BlockPlacement(slice(len(columns)))
nb = new_block_2d(values, placement=bp, refs=refs)
diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index c170704150383..63cddb7f192e6 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -2718,6 +2718,31 @@ def test_frame_string_inference(self):
df = DataFrame({"a": ["a", "b"]}, dtype="object")
tm.assert_frame_equal(df, expected)
+ def test_frame_string_inference_array_string_dtype(self):
+ # GH#54496
+ pa = pytest.importorskip("pyarrow")
+ dtype = pd.ArrowDtype(pa.string())
+ expected = DataFrame(
+ {"a": ["a", "b"]}, dtype=dtype, columns=Index(["a"], dtype=dtype)
+ )
+ with pd.option_context("future.infer_string", True):
+ df = DataFrame({"a": np.array(["a", "b"])})
+ tm.assert_frame_equal(df, expected)
+
+ expected = DataFrame({0: ["a", "b"], 1: ["c", "d"]}, dtype=dtype)
+ with pd.option_context("future.infer_string", True):
+ df = DataFrame(np.array([["a", "c"], ["b", "d"]]))
+ tm.assert_frame_equal(df, expected)
+
+ expected = DataFrame(
+ {"a": ["a", "b"], "b": ["c", "d"]},
+ dtype=dtype,
+ columns=Index(["a", "b"], dtype=dtype),
+ )
+ with pd.option_context("future.infer_string", True):
+ df = DataFrame(np.array([["a", "c"], ["b", "d"]]), columns=["a", "b"])
+ tm.assert_frame_equal(df, expected)
+
class TestDataFrameConstructorIndexInference:
def test_frame_from_dict_of_series_overlapping_monthly_period_indexes(self):
diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py
index 611f4a7f790a6..97bd8633954d8 100644
--- a/pandas/tests/series/test_constructors.py
+++ b/pandas/tests/series/test_constructors.py
@@ -2107,6 +2107,14 @@ def test_series_string_inference_scalar(self):
ser = Series("a", index=[1])
tm.assert_series_equal(ser, expected)
+ def test_series_string_inference_array_string_dtype(self):
+ # GH#54496
+ pa = pytest.importorskip("pyarrow")
+ expected = Series(["a", "b"], dtype=pd.ArrowDtype(pa.string()))
+ with pd.option_context("future.infer_string", True):
+ ser = Series(np.array(["a", "b"]))
+ tm.assert_series_equal(ser, expected)
+
class TestSeriesConstructorIndexCoercion:
def test_series_constructor_datetimelike_index_coercion(self):
| Backport PR #54496: Fix inference for fixed-width numpy strings with arrow string option | https://api.github.com/repos/pandas-dev/pandas/pulls/54672 | 2023-08-21T19:59:34Z | 2023-08-21T23:06:13Z | 2023-08-21T23:06:13Z | 2023-08-21T23:06:14Z |
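The change above makes fixed-width numpy string arrays (dtype kind `"U"`) infer an Arrow-backed string dtype under the `future.infer_string` option. A minimal sketch of the behavior, hedged because the option and the inferred dtype require pandas >= 2.1 with pyarrow installed (an assumption, so the option path is guarded):

```python
import numpy as np
import pandas as pd

arr = np.array(["a", "b"])  # fixed-width numpy strings, dtype kind "U"

# Plain construction works on any pandas version:
ser = pd.Series(arr)
print(ser.tolist())  # ['a', 'b']

# Under future.infer_string (pandas >= 2.1 with pyarrow), the same data
# should infer an Arrow-backed string dtype per this backport:
try:
    with pd.option_context("future.infer_string", True):
        inferred = pd.Series(arr)
    print(inferred.dtype)  # e.g. string[pyarrow] on fixed versions
except Exception:
    pass  # option or pyarrow unavailable in this environment
```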
BUG: drop_duplicates raising for boolean arrow dtype with missing values | diff --git a/doc/source/whatsnew/v2.1.0.rst b/doc/source/whatsnew/v2.1.0.rst
index 43a64a79e691b..bff026d27dbce 100644
--- a/doc/source/whatsnew/v2.1.0.rst
+++ b/doc/source/whatsnew/v2.1.0.rst
@@ -626,6 +626,7 @@ Performance improvements
- Performance improvement in :meth:`DataFrame.transpose` when transposing a DataFrame with a single masked dtype, e.g. :class:`Int64` (:issue:`52836`)
- Performance improvement in :meth:`Series.add` for PyArrow string and binary dtypes (:issue:`53150`)
- Performance improvement in :meth:`Series.corr` and :meth:`Series.cov` for extension dtypes (:issue:`52502`)
+- Performance improvement in :meth:`Series.drop_duplicates` for ``ArrowDtype`` (:issue:`54667`).
- Performance improvement in :meth:`Series.ffill`, :meth:`Series.bfill`, :meth:`DataFrame.ffill`, :meth:`DataFrame.bfill` with PyArrow dtypes (:issue:`53950`)
- Performance improvement in :meth:`Series.str.get_dummies` for PyArrow-backed strings (:issue:`53655`)
- Performance improvement in :meth:`Series.str.get` for PyArrow-backed strings (:issue:`53152`)
@@ -830,6 +831,7 @@ ExtensionArray
- Bug in :class:`~arrays.ArrowExtensionArray` converting pandas non-nanosecond temporal objects from non-zero values to zero values (:issue:`53171`)
- Bug in :meth:`Series.quantile` for PyArrow temporal types raising ``ArrowInvalid`` (:issue:`52678`)
- Bug in :meth:`Series.rank` returning wrong order for small values with ``Float64`` dtype (:issue:`52471`)
+- Bug in :meth:`Series.unique` for boolean ``ArrowDtype`` with ``NA`` values (:issue:`54667`)
- Bug in :meth:`~arrays.ArrowExtensionArray.__iter__` and :meth:`~arrays.ArrowExtensionArray.__getitem__` returning python datetime and timedelta objects for non-nano dtypes (:issue:`53326`)
- Bug where the :class:`DataFrame` repr would not work when a column had an :class:`ArrowDtype` with a ``pyarrow.ExtensionDtype`` (:issue:`54063`)
- Bug where the ``__from_arrow__`` method of masked ExtensionDtypes (e.g. :class:`Float64Dtype`, :class:`BooleanDtype`) would not accept PyArrow arrays of type ``pyarrow.null()`` (:issue:`52223`)
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index 14dee202a9d8d..06da747a450ee 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -55,6 +55,7 @@
)
from pandas.core.dtypes.concat import concat_compat
from pandas.core.dtypes.dtypes import (
+ ArrowDtype,
BaseMaskedDtype,
CategoricalDtype,
ExtensionDtype,
@@ -996,9 +997,13 @@ def duplicated(
-------
duplicated : ndarray[bool]
"""
- if hasattr(values, "dtype") and isinstance(values.dtype, BaseMaskedDtype):
- values = cast("BaseMaskedArray", values)
- return htable.duplicated(values._data, keep=keep, mask=values._mask)
+ if hasattr(values, "dtype"):
+ if isinstance(values.dtype, ArrowDtype):
+ values = values._to_masked() # type: ignore[union-attr]
+
+ if isinstance(values.dtype, BaseMaskedDtype):
+ values = cast("BaseMaskedArray", values)
+ return htable.duplicated(values._data, keep=keep, mask=values._mask)
values = _ensure_data(values)
return htable.duplicated(values, keep=keep)
diff --git a/pandas/tests/series/methods/test_drop_duplicates.py b/pandas/tests/series/methods/test_drop_duplicates.py
index 96c2e1ba6d9bb..324ab1204e16e 100644
--- a/pandas/tests/series/methods/test_drop_duplicates.py
+++ b/pandas/tests/series/methods/test_drop_duplicates.py
@@ -249,3 +249,10 @@ def test_drop_duplicates_ignore_index(self):
result = ser.drop_duplicates(ignore_index=True)
expected = Series([1, 2, 3])
tm.assert_series_equal(result, expected)
+
+ def test_duplicated_arrow_dtype(self):
+ pytest.importorskip("pyarrow")
+ ser = Series([True, False, None, False], dtype="bool[pyarrow]")
+ result = ser.drop_duplicates()
+ expected = Series([True, False, None], dtype="bool[pyarrow]")
+ tm.assert_series_equal(result, expected)
| - [x] closes #54667 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
cc @mroeschke
More generally, how do we want to proceed with Arrow EA fixes after 2.1 is out? | https://api.github.com/repos/pandas-dev/pandas/pulls/54670 | 2023-08-21T19:33:16Z | 2023-08-22T18:48:23Z | 2023-08-22T18:48:23Z | 2023-08-22T18:50:31Z |
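The test added in the diff above can be exercised directly. A minimal sketch, assuming pyarrow may not be installed (so the `bool[pyarrow]` path is guarded); the plain-dtype call is the always-working baseline:

```python
import pandas as pd

# Baseline: drop_duplicates on a plain boolean Series has always worked.
ser = pd.Series([True, False, False])
print(ser.drop_duplicates().tolist())  # [True, False]

# The bug (GH#54667): the same call on a pyarrow-backed boolean Series
# with a missing value raised instead of returning the unique values.
try:
    import pyarrow  # noqa: F401 -- optional dependency

    ser_pa = pd.Series([True, False, None, False], dtype="bool[pyarrow]")
    print(ser_pa.drop_duplicates())  # True, False, <NA> on fixed versions
except Exception:
    pass  # pyarrow not installed, or pandas predates the fix
```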
Backport PR #54641 on branch 2.1.x (BUG: getitem indexing wrong axis) | diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index ca0b1705b5c38..a2871f364f092 100644
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -985,8 +985,9 @@ def _getitem_tuple_same_dim(self, tup: tuple):
"""
retval = self.obj
# Selecting columns before rows is signficiantly faster
+ start_val = (self.ndim - len(tup)) + 1
for i, key in enumerate(reversed(tup)):
- i = self.ndim - i - 1
+ i = self.ndim - i - start_val
if com.is_null_slice(key):
continue
diff --git a/pandas/tests/frame/indexing/test_getitem.py b/pandas/tests/frame/indexing/test_getitem.py
index 9fed2116b2896..9d9324f557c8d 100644
--- a/pandas/tests/frame/indexing/test_getitem.py
+++ b/pandas/tests/frame/indexing/test_getitem.py
@@ -458,6 +458,14 @@ def test_getitem_datetime_slice(self):
):
df["2011-01-01":"2011-11-01"]
+ def test_getitem_slice_same_dim_only_one_axis(self):
+ # GH#54622
+ df = DataFrame(np.random.default_rng(2).standard_normal((10, 8)))
+ result = df.iloc[(slice(None, None, 2),)]
+ assert result.shape == (5, 8)
+ expected = df.iloc[slice(None, None, 2), slice(None)]
+ tm.assert_frame_equal(result, expected)
+
class TestGetitemDeprecatedIndexers:
@pytest.mark.parametrize("key", [{"a", "b"}, {"a": "a"}])
| Backport PR #54641: BUG: getitem indexing wrong axis | https://api.github.com/repos/pandas-dev/pandas/pulls/54669 | 2023-08-21T18:44:31Z | 2023-08-21T20:25:03Z | 2023-08-21T20:25:03Z | 2023-08-21T20:25:03Z |
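The indexing bug fixed above (GH#54622) concerned `iloc` with a tuple shorter than `df.ndim`: the loop counted axes from the wrong end, so a 1-tuple row slice was applied to the columns. A small sketch; the explicit two-axis form is the reference behavior, while the 1-tuple form only agrees with it on versions that include this fix:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(80).reshape(10, 8))

# Explicit slices for both axes have always selected the right axis:
expected = df.iloc[slice(None, None, 2), slice(None)]
print(expected.shape)  # (5, 8)

# The bug: a 1-tuple containing only the row slice. On fixed versions
# this matches `expected`; on affected versions it sliced the columns.
result = df.iloc[(slice(None, None, 2),)]
print(result.shape)
```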
ENH: numba engine in df.apply | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index a795514aa31f8..3a4f9576eb135 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -28,7 +28,7 @@ enhancement2
Other enhancements
^^^^^^^^^^^^^^^^^^
--
+- DataFrame.apply now allows the usage of numba (via ``engine="numba"``) to JIT compile the passed function, allowing for potential speedups (:issue:`54666`)
-
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/_numba/executor.py b/pandas/core/_numba/executor.py
index 5cd4779907146..0a26acb7df60a 100644
--- a/pandas/core/_numba/executor.py
+++ b/pandas/core/_numba/executor.py
@@ -15,6 +15,45 @@
from pandas.compat._optional import import_optional_dependency
+@functools.cache
+def generate_apply_looper(func, nopython=True, nogil=True, parallel=False):
+ if TYPE_CHECKING:
+ import numba
+ else:
+ numba = import_optional_dependency("numba")
+ nb_compat_func = numba.extending.register_jitable(func)
+
+ @numba.jit(nopython=nopython, nogil=nogil, parallel=parallel)
+ def nb_looper(values, axis):
+ # Operate on the first row/col in order to get
+ # the output shape
+ if axis == 0:
+ first_elem = values[:, 0]
+ dim0 = values.shape[1]
+ else:
+ first_elem = values[0]
+ dim0 = values.shape[0]
+ res0 = nb_compat_func(first_elem)
+ # Use np.asarray to get shape for
+ # https://github.com/numba/numba/issues/4202#issuecomment-1185981507
+ buf_shape = (dim0,) + np.atleast_1d(np.asarray(res0)).shape
+ if axis == 0:
+ buf_shape = buf_shape[::-1]
+ buff = np.empty(buf_shape)
+
+ if axis == 1:
+ buff[0] = res0
+ for i in numba.prange(1, values.shape[0]):
+ buff[i] = nb_compat_func(values[i])
+ else:
+ buff[:, 0] = res0
+ for j in numba.prange(1, values.shape[1]):
+ buff[:, j] = nb_compat_func(values[:, j])
+ return buff
+
+ return nb_looper
+
+
@functools.cache
def make_looper(func, result_dtype, is_grouped_kernel, nopython, nogil, parallel):
if TYPE_CHECKING:
diff --git a/pandas/core/apply.py b/pandas/core/apply.py
index 26467a4a982fa..78d52ed262c7a 100644
--- a/pandas/core/apply.py
+++ b/pandas/core/apply.py
@@ -49,6 +49,7 @@
ABCSeries,
)
+from pandas.core._numba.executor import generate_apply_looper
import pandas.core.common as com
from pandas.core.construction import ensure_wrapped_if_datetimelike
@@ -80,6 +81,8 @@ def frame_apply(
raw: bool = False,
result_type: str | None = None,
by_row: Literal[False, "compat"] = "compat",
+ engine: str = "python",
+ engine_kwargs: dict[str, bool] | None = None,
args=None,
kwargs=None,
) -> FrameApply:
@@ -100,6 +103,8 @@ def frame_apply(
raw=raw,
result_type=result_type,
by_row=by_row,
+ engine=engine,
+ engine_kwargs=engine_kwargs,
args=args,
kwargs=kwargs,
)
@@ -756,11 +761,15 @@ def __init__(
result_type: str | None,
*,
by_row: Literal[False, "compat"] = False,
+ engine: str = "python",
+ engine_kwargs: dict[str, bool] | None = None,
args,
kwargs,
) -> None:
if by_row is not False and by_row != "compat":
raise ValueError(f"by_row={by_row} not allowed")
+ self.engine = engine
+ self.engine_kwargs = engine_kwargs
super().__init__(
obj, func, raw, result_type, by_row=by_row, args=args, kwargs=kwargs
)
@@ -805,6 +814,12 @@ def values(self):
def apply(self) -> DataFrame | Series:
"""compute the results"""
+
+ if self.engine == "numba" and not self.raw:
+ raise ValueError(
+ "The numba engine in DataFrame.apply can only be used when raw=True"
+ )
+
# dispatch to handle list-like or dict-like
if is_list_like(self.func):
return self.apply_list_or_dict_like()
@@ -834,7 +849,7 @@ def apply(self) -> DataFrame | Series:
# raw
elif self.raw:
- return self.apply_raw()
+ return self.apply_raw(engine=self.engine, engine_kwargs=self.engine_kwargs)
return self.apply_standard()
@@ -907,7 +922,7 @@ def apply_empty_result(self):
else:
return self.obj.copy()
- def apply_raw(self):
+ def apply_raw(self, engine="python", engine_kwargs=None):
"""apply to the values as a numpy array"""
def wrap_function(func):
@@ -925,7 +940,23 @@ def wrapper(*args, **kwargs):
return wrapper
- result = np.apply_along_axis(wrap_function(self.func), self.axis, self.values)
+ if engine == "numba":
+ engine_kwargs = {} if engine_kwargs is None else engine_kwargs
+
+ # error: Argument 1 to "__call__" of "_lru_cache_wrapper" has
+ # incompatible type "Callable[..., Any] | str | list[Callable
+ # [..., Any] | str] | dict[Hashable,Callable[..., Any] | str |
+ # list[Callable[..., Any] | str]]"; expected "Hashable"
+ nb_looper = generate_apply_looper(
+ self.func, **engine_kwargs # type: ignore[arg-type]
+ )
+ result = nb_looper(self.values, self.axis)
+ # If we made the result 2-D, squeeze it back to 1-D
+ result = np.squeeze(result)
+ else:
+ result = np.apply_along_axis(
+ wrap_function(self.func), self.axis, self.values
+ )
# TODO: mixed type case
if result.ndim == 2:
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index f1fc63bc4b1ea..8fcb91c846826 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -9925,6 +9925,8 @@ def apply(
result_type: Literal["expand", "reduce", "broadcast"] | None = None,
args=(),
by_row: Literal[False, "compat"] = "compat",
+ engine: Literal["python", "numba"] = "python",
+ engine_kwargs: dict[str, bool] | None = None,
**kwargs,
):
"""
@@ -9984,6 +9986,35 @@ def apply(
If False, the funcs will be passed the whole Series at once.
.. versionadded:: 2.1.0
+
+ engine : {'python', 'numba'}, default 'python'
+ Choose between the python (default) engine or the numba engine in apply.
+
+ The numba engine will attempt to JIT compile the passed function,
+ which may result in speedups for large DataFrames.
+ It also supports the following engine_kwargs :
+
+ - nopython (compile the function in nopython mode)
+ - nogil (release the GIL inside the JIT compiled function)
+ - parallel (try to apply the function in parallel over the DataFrame)
+
+ Note: The numba compiler only supports a subset of
+ valid Python/numpy operations.
+
+ Please read more about the `supported python features
+ <https://numba.pydata.org/numba-doc/dev/reference/pysupported.html>`_
+ and `supported numpy features
+ <https://numba.pydata.org/numba-doc/dev/reference/numpysupported.html>`_
+ in numba to learn what you can or cannot use in the passed function.
+
+ As of right now, the numba engine can only be used with raw=True.
+
+ .. versionadded:: 2.2.0
+
+ engine_kwargs : dict
+ Pass keyword arguments to the engine.
+ This is currently only used by the numba engine,
+ see the documentation for the engine argument for more information.
**kwargs
Additional keyword arguments to pass as keywords arguments to
`func`.
@@ -10084,6 +10115,8 @@ def apply(
raw=raw,
result_type=result_type,
by_row=by_row,
+ engine=engine,
+ engine_kwargs=engine_kwargs,
args=args,
kwargs=kwargs,
)
diff --git a/pandas/tests/apply/test_frame_apply.py b/pandas/tests/apply/test_frame_apply.py
index 3a3f73a68374b..3f2accc23e2d6 100644
--- a/pandas/tests/apply/test_frame_apply.py
+++ b/pandas/tests/apply/test_frame_apply.py
@@ -18,6 +18,13 @@
from pandas.tests.frame.common import zip_frames
+@pytest.fixture(params=["python", "numba"])
+def engine(request):
+ if request.param == "numba":
+ pytest.importorskip("numba")
+ return request.param
+
+
def test_apply(float_frame):
with np.errstate(all="ignore"):
# ufunc
@@ -234,36 +241,42 @@ def test_apply_broadcast_series_lambda_func(int_frame_const_col):
@pytest.mark.parametrize("axis", [0, 1])
-def test_apply_raw_float_frame(float_frame, axis):
+def test_apply_raw_float_frame(float_frame, axis, engine):
+ if engine == "numba":
+ pytest.skip("numba can't handle when UDF returns None.")
+
def _assert_raw(x):
assert isinstance(x, np.ndarray)
assert x.ndim == 1
- float_frame.apply(_assert_raw, axis=axis, raw=True)
+ float_frame.apply(_assert_raw, axis=axis, engine=engine, raw=True)
@pytest.mark.parametrize("axis", [0, 1])
-def test_apply_raw_float_frame_lambda(float_frame, axis):
- result = float_frame.apply(np.mean, axis=axis, raw=True)
+def test_apply_raw_float_frame_lambda(float_frame, axis, engine):
+ result = float_frame.apply(np.mean, axis=axis, engine=engine, raw=True)
expected = float_frame.apply(lambda x: x.values.mean(), axis=axis)
tm.assert_series_equal(result, expected)
-def test_apply_raw_float_frame_no_reduction(float_frame):
+def test_apply_raw_float_frame_no_reduction(float_frame, engine):
# no reduction
- result = float_frame.apply(lambda x: x * 2, raw=True)
+ result = float_frame.apply(lambda x: x * 2, engine=engine, raw=True)
expected = float_frame * 2
tm.assert_frame_equal(result, expected)
@pytest.mark.parametrize("axis", [0, 1])
-def test_apply_raw_mixed_type_frame(mixed_type_frame, axis):
+def test_apply_raw_mixed_type_frame(mixed_type_frame, axis, engine):
+ if engine == "numba":
+ pytest.skip("isinstance check doesn't work with numba")
+
def _assert_raw(x):
assert isinstance(x, np.ndarray)
assert x.ndim == 1
# Mixed dtype (GH-32423)
- mixed_type_frame.apply(_assert_raw, axis=axis, raw=True)
+ mixed_type_frame.apply(_assert_raw, axis=axis, engine=engine, raw=True)
def test_apply_axis1(float_frame):
@@ -300,14 +313,20 @@ def test_apply_mixed_dtype_corner_indexing():
)
@pytest.mark.parametrize("raw", [True, False])
@pytest.mark.parametrize("axis", [0, 1])
-def test_apply_empty_infer_type(ax, func, raw, axis):
+def test_apply_empty_infer_type(ax, func, raw, axis, engine, request):
df = DataFrame(**{ax: ["a", "b", "c"]})
with np.errstate(all="ignore"):
test_res = func(np.array([], dtype="f8"))
is_reduction = not isinstance(test_res, np.ndarray)
- result = df.apply(func, axis=axis, raw=raw)
+ if engine == "numba" and raw is False:
+ mark = pytest.mark.xfail(
+ reason="numba engine only supports raw=True at the moment"
+ )
+ request.node.add_marker(mark)
+
+ result = df.apply(func, axis=axis, engine=engine, raw=raw)
if is_reduction:
agg_axis = df._get_agg_axis(axis)
assert isinstance(result, Series)
@@ -607,8 +626,10 @@ def non_reducing_function(row):
assert names == list(df.index)
-def test_apply_raw_function_runs_once():
+def test_apply_raw_function_runs_once(engine):
# https://github.com/pandas-dev/pandas/issues/34506
+ if engine == "numba":
+ pytest.skip("appending to list outside of numba func is not supported")
df = DataFrame({"a": [1, 2, 3]})
values = [] # Save row values function is applied to
@@ -623,7 +644,7 @@ def non_reducing_function(row):
for func in [reducing_function, non_reducing_function]:
del values[:]
- df.apply(func, raw=True, axis=1)
+ df.apply(func, engine=engine, raw=True, axis=1)
assert values == list(df.a.to_list())
@@ -1449,10 +1470,12 @@ def test_apply_no_suffix_index():
tm.assert_frame_equal(result, expected)
-def test_apply_raw_returns_string():
+def test_apply_raw_returns_string(engine):
# https://github.com/pandas-dev/pandas/issues/35940
+ if engine == "numba":
+ pytest.skip("No object dtype support in numba")
df = DataFrame({"A": ["aa", "bbb"]})
- result = df.apply(lambda x: x[0], axis=1, raw=True)
+ result = df.apply(lambda x: x[0], engine=engine, axis=1, raw=True)
expected = Series(["aa", "bbb"])
tm.assert_series_equal(result, expected)
@@ -1632,3 +1655,14 @@ def test_agg_dist_like_and_nonunique_columns():
result = df.agg({"A": "count"})
expected = df["A"].count()
tm.assert_series_equal(result, expected)
+
+
+def test_numba_unsupported():
+ df = DataFrame(
+ {"A": [None, 2, 3], "B": [1.0, np.nan, 3.0], "C": ["foo", None, "bar"]}
+ )
+ with pytest.raises(
+ ValueError,
+ match="The numba engine in DataFrame.apply can only be used when raw=True",
+ ):
+ df.apply(lambda x: x, engine="numba", raw=False)
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54666 | 2023-08-21T17:33:45Z | 2023-09-11T16:34:04Z | 2023-09-11T16:34:04Z | 2023-09-11T19:25:46Z |
REF: use can_use_libjoin more consistently, docstring for it | diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index ee36a3515c4b3..96ff25a6bc423 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -373,9 +373,6 @@ def _left_indexer_unique(self, other: Self) -> npt.NDArray[np.intp]:
# Caller is responsible for ensuring other.dtype == self.dtype
sv = self._get_join_target()
ov = other._get_join_target()
- # can_use_libjoin assures sv and ov are ndarrays
- sv = cast(np.ndarray, sv)
- ov = cast(np.ndarray, ov)
# similar but not identical to ov.searchsorted(sv)
return libjoin.left_join_indexer_unique(sv, ov)
@@ -386,9 +383,6 @@ def _left_indexer(
# Caller is responsible for ensuring other.dtype == self.dtype
sv = self._get_join_target()
ov = other._get_join_target()
- # can_use_libjoin assures sv and ov are ndarrays
- sv = cast(np.ndarray, sv)
- ov = cast(np.ndarray, ov)
joined_ndarray, lidx, ridx = libjoin.left_join_indexer(sv, ov)
joined = self._from_join_target(joined_ndarray)
return joined, lidx, ridx
@@ -400,9 +394,6 @@ def _inner_indexer(
# Caller is responsible for ensuring other.dtype == self.dtype
sv = self._get_join_target()
ov = other._get_join_target()
- # can_use_libjoin assures sv and ov are ndarrays
- sv = cast(np.ndarray, sv)
- ov = cast(np.ndarray, ov)
joined_ndarray, lidx, ridx = libjoin.inner_join_indexer(sv, ov)
joined = self._from_join_target(joined_ndarray)
return joined, lidx, ridx
@@ -414,9 +405,6 @@ def _outer_indexer(
# Caller is responsible for ensuring other.dtype == self.dtype
sv = self._get_join_target()
ov = other._get_join_target()
- # can_use_libjoin assures sv and ov are ndarrays
- sv = cast(np.ndarray, sv)
- ov = cast(np.ndarray, ov)
joined_ndarray, lidx, ridx = libjoin.outer_join_indexer(sv, ov)
joined = self._from_join_target(joined_ndarray)
return joined, lidx, ridx
@@ -3354,6 +3342,7 @@ def _union(self, other: Index, sort: bool | None):
and other.is_monotonic_increasing
and not (self.has_duplicates and other.has_duplicates)
and self._can_use_libjoin
+ and other._can_use_libjoin
):
# Both are monotonic and at least one is unique, so can use outer join
# (actually don't need either unique, but without this restriction
@@ -3452,7 +3441,7 @@ def intersection(self, other, sort: bool = False):
self, other = self._dti_setop_align_tzs(other, "intersection")
if self.equals(other):
- if self.has_duplicates:
+ if not self.is_unique:
result = self.unique()._get_reconciled_name_object(other)
else:
result = self._get_reconciled_name_object(other)
@@ -3507,7 +3496,9 @@ def _intersection(self, other: Index, sort: bool = False):
self.is_monotonic_increasing
and other.is_monotonic_increasing
and self._can_use_libjoin
+ and other._can_use_libjoin
and not isinstance(self, ABCMultiIndex)
+ and not isinstance(other, ABCMultiIndex)
):
try:
res_indexer, indexer, _ = self._inner_indexer(other)
@@ -4654,7 +4645,10 @@ def join(
return self._join_non_unique(other, how=how)
elif not self.is_unique or not other.is_unique:
if self.is_monotonic_increasing and other.is_monotonic_increasing:
- if not isinstance(self.dtype, IntervalDtype):
+ # Note: 2023-08-15 we *do* have tests that get here with
+ # Categorical, string[python] (can use libjoin)
+ # and Interval (cannot)
+ if self._can_use_libjoin and other._can_use_libjoin:
# otherwise we will fall through to _join_via_get_indexer
# GH#39133
# go through object dtype for ea till engine is supported properly
@@ -4666,6 +4660,7 @@ def join(
self.is_monotonic_increasing
and other.is_monotonic_increasing
and self._can_use_libjoin
+ and other._can_use_libjoin
and not isinstance(self, ABCMultiIndex)
and not isinstance(self.dtype, CategoricalDtype)
):
@@ -4970,6 +4965,7 @@ def _join_monotonic(
) -> tuple[Index, npt.NDArray[np.intp] | None, npt.NDArray[np.intp] | None]:
# We only get here with matching dtypes and both monotonic increasing
assert other.dtype == self.dtype
+ assert self._can_use_libjoin and other._can_use_libjoin
if self.equals(other):
# This is a convenient place for this check, but its correctness
@@ -5038,19 +5034,28 @@ def _wrap_joined_index(
name = get_op_result_name(self, other)
return self._constructor._with_infer(joined, name=name, dtype=self.dtype)
+ @final
@cache_readonly
def _can_use_libjoin(self) -> bool:
"""
- Whether we can use the fastpaths implement in _libs.join
+ Whether we can use the fastpaths implemented in _libs.join.
+
+ This is driven by whether (in monotonic increasing cases that are
+ guaranteed not to have NAs) we can convert to a np.ndarray without
+ making a copy. If we cannot, this negates the performance benefit
+ of using libjoin.
"""
if type(self) is Index:
# excludes EAs, but include masks, we get here with monotonic
# values only, meaning no NA
return (
isinstance(self.dtype, np.dtype)
- or isinstance(self.values, BaseMaskedArray)
- or isinstance(self._values, ArrowExtensionArray)
+ or isinstance(self._values, (ArrowExtensionArray, BaseMaskedArray))
+ or self.dtype == "string[python]"
)
+ # For IntervalIndex, the conversion to numpy converts
+ # to object dtype, which negates the performance benefit of libjoin
+ # TODO: exclude RangeIndex and MultiIndex as these also make copies?
return not isinstance(self.dtype, IntervalDtype)
# --------------------------------------------------------------------
@@ -5172,7 +5177,8 @@ def _get_engine_target(self) -> ArrayLike:
return self._values.astype(object)
return vals
- def _get_join_target(self) -> ArrayLike:
+ @final
+ def _get_join_target(self) -> np.ndarray:
"""
Get the ndarray or ExtensionArray that we can pass to the join
functions.
@@ -5184,7 +5190,13 @@ def _get_join_target(self) -> ArrayLike:
# This is only used if our array is monotonic, so no missing values
# present
return self._values.to_numpy()
- return self._get_engine_target()
+
+ # TODO: exclude ABCRangeIndex, ABCMultiIndex cases here as those create
+ # copies.
+ target = self._get_engine_target()
+ if not isinstance(target, np.ndarray):
+ raise ValueError("_can_use_libjoin should return False.")
+ return target
def _from_join_target(self, result: np.ndarray) -> ArrayLike:
"""
| null | https://api.github.com/repos/pandas-dev/pandas/pulls/54664 | 2023-08-21T16:34:06Z | 2023-08-21T18:46:32Z | 2023-08-21T18:46:32Z | 2023-08-21T18:46:38Z |
DEPR: BaseNoReduceTests | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index 6fdffb4d78341..38c2956d1b137 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -100,7 +100,7 @@ Deprecations
- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_pickle` except ``path``. (:issue:`54229`)
- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_string` except ``buf``. (:issue:`54229`)
- Deprecated not passing a tuple to :class:`DataFrameGroupBy.get_group` or :class:`SeriesGroupBy.get_group` when grouping by a length-1 list-like (:issue:`25971`)
-
+- Deprecated the extension test classes ``BaseNoReduceTests``, ``BaseBooleanReduceTests``, and ``BaseNumericReduceTests``, use ``BaseReduceTests`` instead (:issue:`54663`)
-
.. ---------------------------------------------------------------------------
diff --git a/pandas/tests/extension/base/__init__.py b/pandas/tests/extension/base/__init__.py
index 7cd55b7240d54..82b61722f5e96 100644
--- a/pandas/tests/extension/base/__init__.py
+++ b/pandas/tests/extension/base/__init__.py
@@ -56,12 +56,7 @@ class TestMyDtype(BaseDtypeTests):
BaseUnaryOpsTests,
)
from pandas.tests.extension.base.printing import BasePrintingTests
-from pandas.tests.extension.base.reduce import ( # noqa: F401
- BaseBooleanReduceTests,
- BaseNoReduceTests,
- BaseNumericReduceTests,
- BaseReduceTests,
-)
+from pandas.tests.extension.base.reduce import BaseReduceTests
from pandas.tests.extension.base.reshaping import BaseReshapingTests
from pandas.tests.extension.base.setitem import BaseSetitemTests
@@ -92,3 +87,44 @@ class ExtensionTests(
BaseSetitemTests,
):
pass
+
+
+def __getattr__(name: str):
+ import warnings
+
+ if name == "BaseNoReduceTests":
+ warnings.warn(
+ "BaseNoReduceTests is deprecated and will be removed in a "
+ "future version. Use BaseReduceTests and override "
+ "`_supports_reduction` instead.",
+ FutureWarning,
+ )
+ from pandas.tests.extension.base.reduce import BaseNoReduceTests
+
+ return BaseNoReduceTests
+
+ elif name == "BaseNumericReduceTests":
+ warnings.warn(
+ "BaseNumericReduceTests is deprecated and will be removed in a "
+ "future version. Use BaseReduceTests and override "
+ "`_supports_reduction` instead.",
+ FutureWarning,
+ )
+ from pandas.tests.extension.base.reduce import BaseNumericReduceTests
+
+ return BaseNumericReduceTests
+
+ elif name == "BaseBooleanReduceTests":
+ warnings.warn(
+ "BaseBooleanReduceTests is deprecated and will be removed in a "
+ "future version. Use BaseReduceTests and override "
+ "`_supports_reduction` instead.",
+ FutureWarning,
+ )
+ from pandas.tests.extension.base.reduce import BaseBooleanReduceTests
+
+ return BaseBooleanReduceTests
+
+ raise AttributeError(
+ f"module 'pandas.tests.extension.base' has no attribute '{name}'"
+ )
diff --git a/pandas/tests/extension/base/reduce.py b/pandas/tests/extension/base/reduce.py
index 9b56b10681e15..3dd8caaa82ae2 100644
--- a/pandas/tests/extension/base/reduce.py
+++ b/pandas/tests/extension/base/reduce.py
@@ -129,7 +129,8 @@ def test_reduce_frame(self, data, all_numeric_reductions, skipna):
self.check_reduce_frame(ser, op_name, skipna)
-# TODO: deprecate BaseNoReduceTests, BaseNumericReduceTests, BaseBooleanReduceTests
+# TODO(3.0): remove BaseNoReduceTests, BaseNumericReduceTests,
+# BaseBooleanReduceTests
class BaseNoReduceTests(BaseReduceTests):
"""we don't define any reductions"""
| I'm really not interested in writing tests for the tests | https://api.github.com/repos/pandas-dev/pandas/pulls/54663 | 2023-08-21T15:39:59Z | 2023-08-22T16:56:14Z | 2023-08-22T16:56:14Z | 2023-08-22T19:20:00Z |
REF: remove is_simple_frame from json code | diff --git a/pandas/_libs/src/vendored/ujson/python/objToJSON.c b/pandas/_libs/src/vendored/ujson/python/objToJSON.c
index 4a22de886742c..30b940726af0a 100644
--- a/pandas/_libs/src/vendored/ujson/python/objToJSON.c
+++ b/pandas/_libs/src/vendored/ujson/python/objToJSON.c
@@ -262,21 +262,6 @@ static Py_ssize_t get_attr_length(PyObject *obj, char *attr) {
return ret;
}
-static int is_simple_frame(PyObject *obj) {
- PyObject *mgr = PyObject_GetAttrString(obj, "_mgr");
- if (!mgr) {
- return 0;
- }
- int ret;
- if (PyObject_HasAttrString(mgr, "blocks")) {
- ret = (get_attr_length(mgr, "blocks") <= 1);
- } else {
- ret = 0;
- }
-
- Py_DECREF(mgr);
- return ret;
-}
static npy_int64 get_long_attr(PyObject *o, const char *attr) {
// NB we are implicitly assuming that o is a Timedelta or Timestamp, or NaT
@@ -1140,15 +1125,8 @@ int DataFrame_iterNext(JSOBJ obj, JSONTypeContext *tc) {
GET_TC(tc)->itemValue = PyObject_GetAttrString(obj, "index");
} else if (index == 2) {
memcpy(GET_TC(tc)->cStr, "data", sizeof(char) * 5);
- if (is_simple_frame(obj)) {
- GET_TC(tc)->itemValue = PyObject_GetAttrString(obj, "values");
- if (!GET_TC(tc)->itemValue) {
- return 0;
- }
- } else {
- Py_INCREF(obj);
- GET_TC(tc)->itemValue = obj;
- }
+ Py_INCREF(obj);
+ GET_TC(tc)->itemValue = obj;
} else {
return 0;
}
@@ -1756,22 +1734,10 @@ void Object_beginTypeContext(JSOBJ _obj, JSONTypeContext *tc) {
return;
}
- if (is_simple_frame(obj)) {
- pc->iterBegin = NpyArr_iterBegin;
- pc->iterEnd = NpyArr_iterEnd;
- pc->iterNext = NpyArr_iterNext;
- pc->iterGetName = NpyArr_iterGetName;
-
- pc->newObj = PyObject_GetAttrString(obj, "values");
- if (!pc->newObj) {
- goto INVALID;
- }
- } else {
- pc->iterBegin = PdBlock_iterBegin;
- pc->iterEnd = PdBlock_iterEnd;
- pc->iterNext = PdBlock_iterNext;
- pc->iterGetName = PdBlock_iterGetName;
- }
+ pc->iterBegin = PdBlock_iterBegin;
+ pc->iterEnd = PdBlock_iterEnd;
+ pc->iterNext = PdBlock_iterNext;
+ pc->iterGetName = PdBlock_iterGetName;
pc->iterGetValue = NpyArr_iterGetValue;
if (enc->outputFormat == VALUES) {
| cc @WillAyd the simple_frame path is kludgy, and I expect it's fragile with non-numpy blocks. This just rips it out to simplify the code. | https://api.github.com/repos/pandas-dev/pandas/pulls/54662 | 2023-08-21T15:21:05Z | 2023-08-21T20:26:51Z | 2023-08-21T20:26:51Z | 2023-08-21T20:28:27Z |
Backport PR #54586 on branch 2.1.x (REF: Refactor conversion of na value) | diff --git a/pandas/tests/strings/__init__.py b/pandas/tests/strings/__init__.py
index 9a7622b4f1cd8..496a2d095d85b 100644
--- a/pandas/tests/strings/__init__.py
+++ b/pandas/tests/strings/__init__.py
@@ -1,2 +1,12 @@
# Needed for new arrow string dtype
+
+import pandas as pd
+
object_pyarrow_numpy = ("object",)
+
+
+def _convert_na_value(ser, expected):
+ if ser.dtype != object:
+ # GH#18463
+ expected = expected.fillna(pd.NA)
+ return expected
diff --git a/pandas/tests/strings/test_find_replace.py b/pandas/tests/strings/test_find_replace.py
index bcb8db96b37fa..d5017b1c47d85 100644
--- a/pandas/tests/strings/test_find_replace.py
+++ b/pandas/tests/strings/test_find_replace.py
@@ -11,7 +11,10 @@
Series,
_testing as tm,
)
-from pandas.tests.strings import object_pyarrow_numpy
+from pandas.tests.strings import (
+ _convert_na_value,
+ object_pyarrow_numpy,
+)
# --------------------------------------------------------------------------------------
# str.contains
@@ -758,9 +761,7 @@ def test_findall(any_string_dtype):
ser = Series(["fooBAD__barBAD", np.nan, "foo", "BAD"], dtype=any_string_dtype)
result = ser.str.findall("BAD[_]*")
expected = Series([["BAD__", "BAD"], np.nan, [], ["BAD"]])
- if ser.dtype != object:
- # GH#18463
- expected = expected.fillna(pd.NA)
+ expected = _convert_na_value(ser, expected)
tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/strings/test_split_partition.py b/pandas/tests/strings/test_split_partition.py
index 0298694ccaf71..7fabe238d2b86 100644
--- a/pandas/tests/strings/test_split_partition.py
+++ b/pandas/tests/strings/test_split_partition.py
@@ -12,6 +12,7 @@
Series,
_testing as tm,
)
+from pandas.tests.strings import _convert_na_value
@pytest.mark.parametrize("method", ["split", "rsplit"])
@@ -20,9 +21,7 @@ def test_split(any_string_dtype, method):
result = getattr(values.str, method)("_")
exp = Series([["a", "b", "c"], ["c", "d", "e"], np.nan, ["f", "g", "h"]])
- if values.dtype != object:
- # GH#18463
- exp = exp.fillna(pd.NA)
+ exp = _convert_na_value(values, exp)
tm.assert_series_equal(result, exp)
@@ -32,9 +31,7 @@ def test_split_more_than_one_char(any_string_dtype, method):
values = Series(["a__b__c", "c__d__e", np.nan, "f__g__h"], dtype=any_string_dtype)
result = getattr(values.str, method)("__")
exp = Series([["a", "b", "c"], ["c", "d", "e"], np.nan, ["f", "g", "h"]])
- if values.dtype != object:
- # GH#18463
- exp = exp.fillna(pd.NA)
+ exp = _convert_na_value(values, exp)
tm.assert_series_equal(result, exp)
result = getattr(values.str, method)("__", expand=False)
@@ -46,9 +43,7 @@ def test_split_more_regex_split(any_string_dtype):
values = Series(["a,b_c", "c_d,e", np.nan, "f,g,h"], dtype=any_string_dtype)
result = values.str.split("[,_]")
exp = Series([["a", "b", "c"], ["c", "d", "e"], np.nan, ["f", "g", "h"]])
- if values.dtype != object:
- # GH#18463
- exp = exp.fillna(pd.NA)
+ exp = _convert_na_value(values, exp)
tm.assert_series_equal(result, exp)
@@ -128,9 +123,7 @@ def test_rsplit(any_string_dtype):
values = Series(["a,b_c", "c_d,e", np.nan, "f,g,h"], dtype=any_string_dtype)
result = values.str.rsplit("[,_]")
exp = Series([["a,b_c"], ["c_d,e"], np.nan, ["f,g,h"]])
- if values.dtype != object:
- # GH#18463
- exp = exp.fillna(pd.NA)
+ exp = _convert_na_value(values, exp)
tm.assert_series_equal(result, exp)
@@ -139,9 +132,7 @@ def test_rsplit_max_number(any_string_dtype):
values = Series(["a_b_c", "c_d_e", np.nan, "f_g_h"], dtype=any_string_dtype)
result = values.str.rsplit("_", n=1)
exp = Series([["a_b", "c"], ["c_d", "e"], np.nan, ["f_g", "h"]])
- if values.dtype != object:
- # GH#18463
- exp = exp.fillna(pd.NA)
+ exp = _convert_na_value(values, exp)
tm.assert_series_equal(result, exp)
@@ -455,9 +446,7 @@ def test_partition_series_more_than_one_char(method, exp, any_string_dtype):
s = Series(["a__b__c", "c__d__e", np.nan, "f__g__h", None], dtype=any_string_dtype)
result = getattr(s.str, method)("__", expand=False)
expected = Series(exp)
- if s.dtype != object:
- # GH#18463
- expected = expected.fillna(pd.NA)
+ expected = _convert_na_value(s, expected)
tm.assert_series_equal(result, expected)
@@ -480,9 +469,7 @@ def test_partition_series_none(any_string_dtype, method, exp):
s = Series(["a b c", "c d e", np.nan, "f g h", None], dtype=any_string_dtype)
result = getattr(s.str, method)(expand=False)
expected = Series(exp)
- if s.dtype != object:
- # GH#18463
- expected = expected.fillna(pd.NA)
+ expected = _convert_na_value(s, expected)
tm.assert_series_equal(result, expected)
@@ -505,9 +492,7 @@ def test_partition_series_not_split(any_string_dtype, method, exp):
s = Series(["abc", "cde", np.nan, "fgh", None], dtype=any_string_dtype)
result = getattr(s.str, method)("_", expand=False)
expected = Series(exp)
- if s.dtype != object:
- # GH#18463
- expected = expected.fillna(pd.NA)
+ expected = _convert_na_value(s, expected)
tm.assert_series_equal(result, expected)
@@ -531,9 +516,7 @@ def test_partition_series_unicode(any_string_dtype, method, exp):
result = getattr(s.str, method)("_", expand=False)
expected = Series(exp)
- if s.dtype != object:
- # GH#18463
- expected = expected.fillna(pd.NA)
+ expected = _convert_na_value(s, expected)
tm.assert_series_equal(result, expected)
| Backport PR #54586: REF: Refactor conversion of na value | https://api.github.com/repos/pandas-dev/pandas/pulls/54658 | 2023-08-21T09:18:37Z | 2023-08-21T17:57:50Z | 2023-08-21T17:57:50Z | 2023-08-21T17:57:50Z |
BUG: _apply_rule() ignores tz info on an empty list of observances | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index b90563ba43d83..e122d06c75a71 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -113,6 +113,7 @@ Performance improvements
Bug fixes
~~~~~~~~~
+- Bug in :class:`AbstractHolidayCalendar` where timezone data was not propagated when computing holiday observances (:issue:`54580`)
Categorical
^^^^^^^^^^^
diff --git a/pandas/tests/tseries/holiday/test_holiday.py b/pandas/tests/tseries/holiday/test_holiday.py
index ee83ca144d38a..e751339ca2cd0 100644
--- a/pandas/tests/tseries/holiday/test_holiday.py
+++ b/pandas/tests/tseries/holiday/test_holiday.py
@@ -3,7 +3,10 @@
import pytest
from pytz import utc
-from pandas import DatetimeIndex
+from pandas import (
+ DatetimeIndex,
+ Series,
+)
import pandas._testing as tm
from pandas.tseries.holiday import (
@@ -17,6 +20,7 @@
HolidayCalendarFactory,
Timestamp,
USColumbusDay,
+ USFederalHolidayCalendar,
USLaborDay,
USMartinLutherKingJr,
USMemorialDay,
@@ -311,3 +315,17 @@ class TestHolidayCalendar(AbstractHolidayCalendar):
tm.assert_index_equal(date_interval_low, expected_results)
tm.assert_index_equal(date_window_edge, expected_results)
tm.assert_index_equal(date_interval_high, expected_results)
+
+
+def test_holidays_with_timezone_specified_but_no_occurences():
+ # GH 54580
+ # _apply_rule() in holiday.py was silently dropping timezones if you passed it
+ # an empty list of holiday dates that had timezone information
+ start_date = Timestamp("2018-01-01", tz="America/Chicago")
+ end_date = Timestamp("2018-01-11", tz="America/Chicago")
+ test_case = USFederalHolidayCalendar().holidays(
+ start_date, end_date, return_name=True
+ )
+ expected_results = Series("New Year's Day", index=[start_date])
+
+ tm.assert_equal(test_case, expected_results)
diff --git a/pandas/tseries/holiday.py b/pandas/tseries/holiday.py
index 44c21bc284121..6b40907c009d5 100644
--- a/pandas/tseries/holiday.py
+++ b/pandas/tseries/holiday.py
@@ -354,7 +354,7 @@ def _apply_rule(self, dates: DatetimeIndex) -> DatetimeIndex:
Dates with rules applied
"""
if dates.empty:
- return DatetimeIndex([])
+ return dates.copy()
if self.observance is not None:
return dates.map(lambda d: self.observance(d))
- [x] closes #54580
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54655 | 2023-08-21T02:20:55Z | 2023-08-22T22:51:54Z | 2023-08-22T22:51:54Z | 2023-08-22T22:52:01Z |
DOC: fix build failure with sphinx 7 and SOURCE_DATE_EPOCH | diff --git a/doc/source/conf.py b/doc/source/conf.py
index accbff596b12d..86d2494707ce2 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -162,7 +162,7 @@
# General information about the project.
project = "pandas"
# We have our custom "pandas_footer.html" template, using copyright for the current year
-copyright = f"{datetime.now().year}"
+copyright = f"{datetime.now().year},"
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
| When SOURCE_DATE_EPOCH is set (which is the default when building Debian packages), Sphinx 7 [**requires** a space or comma after the year in 'copyright'](https://sources.debian.org/src/sphinx/7.1.2-2/sphinx/config.py/#L444).
[Debian bug report, with failure log](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1042672).
- [not known to be reported here ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ n/a] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ CI says yes] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [n/a ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [no ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54653 | 2023-08-20T21:30:35Z | 2023-09-22T17:02:03Z | 2023-09-22T17:02:03Z | 2023-09-22T17:02:12Z |
Removes None type from max_info_rows | diff --git a/pandas/core/config_init.py b/pandas/core/config_init.py
index 27e9bf8958ab0..1dc09388b698f 100644
--- a/pandas/core/config_init.py
+++ b/pandas/core/config_init.py
@@ -265,7 +265,7 @@ def use_numba_cb(key) -> None:
"""
pc_max_info_rows_doc = """
-: int or None
+: int
df.info() will usually show null-counts for each column.
For large frames this can be quite slow. max_info_rows and max_info_cols
limit this null check only to frames with smaller dimensions than
@@ -322,7 +322,7 @@ def is_terminal() -> bool:
"max_info_rows",
1690785,
pc_max_info_rows_doc,
- validator=is_instance_factory((int, type(None))),
+ validator=is_int,
)
cf.register_option("max_rows", 60, pc_max_rows_doc, validator=is_nonnegative_int)
cf.register_option(
| `max_info_rows` doesn't support the `None` type.
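For illustration, the change swaps `is_instance_factory((int, type(None)))` for `is_int`; below is a standalone sketch of that validator-factory pattern (the names mirror pandas' internal `pandas._config.config` helpers, but this is not the actual implementation):

```python
# Standalone sketch of the validator-factory pattern used by pandas'
# config machinery: the factory returns a callable that raises
# ValueError when the option value has the wrong type.
def is_type_factory(_type):
    def inner(x):
        if type(x) != _type:
            raise ValueError(f"Value must have type '{_type}'")
    return inner

is_int = is_type_factory(int)

is_int(100)  # an int passes silently
try:
    is_int(None)  # with the int-only validator, None is rejected
except ValueError as exc:
    print("rejected:", exc)
```

With the old `is_instance_factory((int, type(None)))` validator, both `100` and `None` would pass, which is why the docstring's `int or None` wording also had to change.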
- [x] closes #54600 | https://api.github.com/repos/pandas-dev/pandas/pulls/54652 | 2023-08-20T18:42:53Z | 2023-08-22T16:36:39Z | 2023-08-22T16:36:39Z | 2023-08-22T16:36:45Z |
Updated documentation for pandas.DataFrame.map | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 282ecdcf31939..38e4ab0dc6df1 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -10129,7 +10129,12 @@ def map(
0 1
0 3 4
1 5 5
-
+
+ >>> df.map(round, digits=2)
+ 0 1
+ 0 1.00 2.12
+ 1 3.36 4.57
+
Like Series.map, NA values can be ignored:
>>> df_copy = df.copy()
| closes issue #54648 by adding another example of the map method using a standard func, along with additional keyword arguments. | https://api.github.com/repos/pandas-dev/pandas/pulls/54649 | 2023-08-20T16:36:23Z | 2023-09-18T17:10:28Z | null | 2023-09-18T17:10:28Z |
Add nan_count in Describe Functions with all conditions, made all request. | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 7da41b890598d..139c23643b2ca 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -11313,18 +11313,19 @@ def describe(
DataFrame.mean: Mean of the values.
DataFrame.std: Standard deviation of the observations.
DataFrame.select_dtypes: Subset of a DataFrame including/excluding
+ DataFrame.isna().sum(): Find the count of the NA/null values.
columns based on their dtype.
Notes
-----
For numeric data, the result's index will include ``count``,
- ``mean``, ``std``, ``min``, ``max`` as well as lower, ``50`` and
+ ``mean``, ``std``, ``min``, ``max`` ,``nan_count``, as well as lower, ``50`` and
upper percentiles. By default the lower percentile is ``25`` and the
upper percentile is ``75``. The ``50`` percentile is the
same as the median.
For object data (e.g. strings or timestamps), the result's index
- will include ``count``, ``unique``, ``top``, and ``freq``. The ``top``
+ will include ``count``, ``unique``, ``top``, ``freq`` and ``nan_count``. The ``top``
is the most common value. The ``freq`` is the most common value's
frequency. Timestamps also include the ``first`` and ``last`` items.
@@ -11347,26 +11348,28 @@ def describe(
--------
Describing a numeric ``Series``.
- >>> s = pd.Series([1, 2, 3])
+ >>> s = pd.Series([1, 2, 3, None])
>>> s.describe()
- count 3.0
- mean 2.0
- std 1.0
- min 1.0
- 25% 1.5
- 50% 2.0
- 75% 2.5
- max 3.0
+ count 3.0
+ mean 2.0
+ std 1.0
+ min 1.0
+ 25% 1.5
+ 50% 2.0
+ 75% 2.5
+ max 3.0
+ nan_count 1.0
dtype: float64
Describing a categorical ``Series``.
- >>> s = pd.Series(['a', 'a', 'b', 'c'])
+ >>> s = pd.Series(['a', 'a', 'b', 'c', None])
>>> s.describe()
- count 4
- unique 3
- top a
- freq 2
+ count 4
+ unique 3
+ top a
+ freq 2
+ nan_count 1
dtype: object
Describing a timestamp ``Series``.
@@ -11374,7 +11377,8 @@ def describe(
>>> s = pd.Series([
... np.datetime64("2000-01-01"),
... np.datetime64("2010-01-01"),
- ... np.datetime64("2010-01-01")
+ ... np.datetime64("2010-01-01"),
+ None
... ])
>>> s.describe()
count 3
@@ -11384,14 +11388,15 @@ def describe(
50% 2010-01-01 00:00:00
75% 2010-01-01 00:00:00
max 2010-01-01 00:00:00
+ nan_count 1
dtype: object
Describing a ``DataFrame``. By default only numeric fields
are returned.
- >>> df = pd.DataFrame({'categorical': pd.Categorical(['d','e','f']),
- ... 'numeric': [1, 2, 3],
- ... 'object': ['a', 'b', 'c']
+ >>> df = pd.DataFrame({'categorical': pd.Categorical(['d','e','f',None]),
+ ... 'numeric': [1, 2, 3, None],
+ ... 'object': ['a', 'b', 'c', None]
... })
>>> df.describe()
numeric
@@ -11403,6 +11408,8 @@ def describe(
50% 2.0
75% 2.5
max 3.0
+ nan_count 1.0
+
Describing all columns of a ``DataFrame`` regardless of data type.
@@ -11419,19 +11426,21 @@ def describe(
50% NaN 2.0 NaN
75% NaN 2.5 NaN
max NaN 3.0 NaN
+ nan_count 1 1.0 1
Describing a column from a ``DataFrame`` by accessing it as
an attribute.
>>> df.numeric.describe()
- count 3.0
- mean 2.0
- std 1.0
- min 1.0
- 25% 1.5
- 50% 2.0
- 75% 2.5
- max 3.0
+ count 3.0
+ mean 2.0
+ std 1.0
+ min 1.0
+ 25% 1.5
+ 50% 2.0
+ 75% 2.5
+ max 3.0
+ nan_count 1.0
Name: numeric, dtype: float64
Including only numeric columns in a ``DataFrame`` description.
@@ -11446,6 +11455,7 @@ def describe(
50% 2.0
75% 2.5
max 3.0
+ nan_count 1.0
Including only string columns in a ``DataFrame`` description.
@@ -11455,6 +11465,7 @@ def describe(
unique 3
top a
freq 1
+ nan_count 1
Including only categorical columns from a ``DataFrame`` description.
@@ -11464,6 +11475,7 @@ def describe(
unique 3
top d
freq 1
+ nan_count 1
Excluding numeric columns from a ``DataFrame`` description.
@@ -11473,6 +11485,7 @@ def describe(
unique 3 3
top f a
freq 1 1
+ nan_count 1 1
Excluding object columns from a ``DataFrame`` description.
@@ -11489,6 +11502,7 @@ def describe(
50% NaN 2.0
75% NaN 2.5
max NaN 3.0
+ nan_count 1 1.0
"""
return describe_ndframe(
obj=self,
diff --git a/pandas/core/methods/describe.py b/pandas/core/methods/describe.py
index 5bb6bebd8a87b..eb4b9e3f3e31d 100644
--- a/pandas/core/methods/describe.py
+++ b/pandas/core/methods/describe.py
@@ -226,11 +226,11 @@ def describe_numeric_1d(series: Series, percentiles: Sequence[float]) -> Series:
formatted_percentiles = format_percentiles(percentiles)
- stat_index = ["count", "mean", "std", "min"] + formatted_percentiles + ["max"]
+ stat_index = ["count", "mean", "std", "min"] + formatted_percentiles + ["max"] +["nan_count"]
d = (
[series.count(), series.mean(), series.std(), series.min()]
+ series.quantile(percentiles).tolist()
- + [series.max()]
+ + [series.max()] + [series.isna().sum()]
)
# GH#48340 - always return float on non-complex numeric data
dtype: DtypeObj | None
@@ -266,7 +266,7 @@ def describe_categorical_1d(
percentiles_ignored : list-like of numbers
Ignored, but in place to unify interface.
"""
- names = ["count", "unique", "top", "freq"]
+ names = ["count", "unique", "top", "freq",'nan_count']
objcounts = data.value_counts()
count_unique = len(objcounts[objcounts != 0])
if count_unique > 0:
@@ -278,7 +278,7 @@ def describe_categorical_1d(
top, freq = np.nan, np.nan
dtype = "object"
- result = [data.count(), count_unique, top, freq]
+ result = [data.count(), count_unique, top, freq,data.isna().sum()]
from pandas import Series
@@ -313,12 +313,13 @@ def describe_timestamp_as_categorical_1d(
top = top.tz_convert(tz)
else:
top = top.tz_localize(tz)
- names += ["top", "freq", "first", "last"]
+ names += ["top", "freq", "first", "last","nan_count"]
result += [
top,
freq,
Timestamp(asint.min(), tz=tz),
Timestamp(asint.max(), tz=tz),
+ data.isna().sum()
]
# If the DataFrame is empty, set 'top' and 'freq' to None
@@ -348,11 +349,11 @@ def describe_timestamp_1d(data: Series, percentiles: Sequence[float]) -> Series:
formatted_percentiles = format_percentiles(percentiles)
- stat_index = ["count", "mean", "min"] + formatted_percentiles + ["max"]
+ stat_index = ["count", "mean", "min"] + formatted_percentiles + ["max"] + ["nan_count"]
d = (
[data.count(), data.mean(), data.min()]
+ data.quantile(percentiles).tolist()
- + [data.max()]
+ + [data.max()] + [data.isna().sum()]
)
return Series(d, index=stat_index, name=data.name)
| Hey,
I am Aditya Tomar from India.
I found issue #54076.
I added nan_count to the describe function for DataFrame, Series, and timestamps.
Please take a look and share your feedback.
Mail: adityatomarsvnit@gmail.com
| https://api.github.com/repos/pandas-dev/pandas/pulls/54647 | 2023-08-20T15:17:02Z | 2023-09-18T17:11:13Z | null | 2023-09-18T17:11:13Z |
ENH: Add on_bad_lines for pyarrow | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index 8eab623a2b5f7..85cf59dc7135b 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -73,6 +73,8 @@ enhancement2
Other enhancements
^^^^^^^^^^^^^^^^^^
+
+- :func:`read_csv` now supports ``on_bad_lines`` parameter with ``engine="pyarrow"``. (:issue:`54480`)
- :meth:`ExtensionArray._explode` interface method added to allow extension type implementations of the ``explode`` method (:issue:`54833`)
- DataFrame.apply now allows the usage of numba (via ``engine="numba"``) to JIT compile the passed function, allowing for potential speedups (:issue:`54666`)
-
diff --git a/pandas/io/parsers/arrow_parser_wrapper.py b/pandas/io/parsers/arrow_parser_wrapper.py
index bb6bcd3c4d6a0..765a4ffcd2cb9 100644
--- a/pandas/io/parsers/arrow_parser_wrapper.py
+++ b/pandas/io/parsers/arrow_parser_wrapper.py
@@ -1,11 +1,17 @@
from __future__ import annotations
from typing import TYPE_CHECKING
+import warnings
from pandas._config import using_pyarrow_string_dtype
from pandas._libs import lib
from pandas.compat._optional import import_optional_dependency
+from pandas.errors import (
+ ParserError,
+ ParserWarning,
+)
+from pandas.util._exceptions import find_stack_level
from pandas.core.dtypes.inference import is_integer
@@ -85,6 +91,30 @@ def _get_pyarrow_options(self) -> None:
and option_name
in ("delimiter", "quote_char", "escape_char", "ignore_empty_lines")
}
+
+ on_bad_lines = self.kwds.get("on_bad_lines")
+ if on_bad_lines is not None:
+ if callable(on_bad_lines):
+ self.parse_options["invalid_row_handler"] = on_bad_lines
+ elif on_bad_lines == ParserBase.BadLineHandleMethod.ERROR:
+ self.parse_options[
+ "invalid_row_handler"
+ ] = None # PyArrow raises an exception by default
+ elif on_bad_lines == ParserBase.BadLineHandleMethod.WARN:
+
+ def handle_warning(invalid_row):
+ warnings.warn(
+ f"Expected {invalid_row.expected_columns} columns, but found "
+ f"{invalid_row.actual_columns}: {invalid_row.text}",
+ ParserWarning,
+ stacklevel=find_stack_level(),
+ )
+ return "skip"
+
+ self.parse_options["invalid_row_handler"] = handle_warning
+ elif on_bad_lines == ParserBase.BadLineHandleMethod.SKIP:
+ self.parse_options["invalid_row_handler"] = lambda _: "skip"
+
self.convert_options = {
option_name: option_value
for option_name, option_value in self.kwds.items()
@@ -190,12 +220,15 @@ def read(self) -> DataFrame:
pyarrow_csv = import_optional_dependency("pyarrow.csv")
self._get_pyarrow_options()
- table = pyarrow_csv.read_csv(
- self.src,
- read_options=pyarrow_csv.ReadOptions(**self.read_options),
- parse_options=pyarrow_csv.ParseOptions(**self.parse_options),
- convert_options=pyarrow_csv.ConvertOptions(**self.convert_options),
- )
+ try:
+ table = pyarrow_csv.read_csv(
+ self.src,
+ read_options=pyarrow_csv.ReadOptions(**self.read_options),
+ parse_options=pyarrow_csv.ParseOptions(**self.parse_options),
+ convert_options=pyarrow_csv.ConvertOptions(**self.convert_options),
+ )
+ except pa.ArrowInvalid as e:
+ raise ParserError(e) from e
dtype_backend = self.kwds["dtype_backend"]
diff --git a/pandas/io/parsers/readers.py b/pandas/io/parsers/readers.py
index acf35ebd6afe5..6ce6ac71b1ddd 100644
--- a/pandas/io/parsers/readers.py
+++ b/pandas/io/parsers/readers.py
@@ -401,6 +401,13 @@
expected, a ``ParserWarning`` will be emitted while dropping extra elements.
Only supported when ``engine='python'``
+ .. versionchanged:: 2.2.0
+
+ - Callable, function with signature
+ as described in `pyarrow documentation
+ <https://arrow.apache.org/docs/python/generated/pyarrow.csv.ParseOptions.html
+ #pyarrow.csv.ParseOptions.invalid_row_handler>_` when ``engine='pyarrow'``
+
delim_whitespace : bool, default False
Specifies whether or not whitespace (e.g. ``' '`` or ``'\\t'``) will be
used as the ``sep`` delimiter. Equivalent to setting ``sep='\\s+'``. If this option
@@ -494,7 +501,6 @@ class _Fwf_Defaults(TypedDict):
"thousands",
"memory_map",
"dialect",
- "on_bad_lines",
"delim_whitespace",
"quoting",
"lineterminator",
@@ -2142,9 +2148,10 @@ def _refine_defaults_read(
elif on_bad_lines == "skip":
kwds["on_bad_lines"] = ParserBase.BadLineHandleMethod.SKIP
elif callable(on_bad_lines):
- if engine != "python":
+ if engine not in ["python", "pyarrow"]:
raise ValueError(
- "on_bad_line can only be a callable function if engine='python'"
+ "on_bad_line can only be a callable function "
+ "if engine='python' or 'pyarrow'"
)
kwds["on_bad_lines"] = on_bad_lines
else:
diff --git a/pandas/tests/io/parser/common/test_read_errors.py b/pandas/tests/io/parser/common/test_read_errors.py
index 0c5a2e0d04e5a..4e82dca83e2d0 100644
--- a/pandas/tests/io/parser/common/test_read_errors.py
+++ b/pandas/tests/io/parser/common/test_read_errors.py
@@ -1,5 +1,5 @@
"""
-Tests that work on both the Python and C engines but do not have a
+Tests that work on the Python, C and PyArrow engines but do not have a
specific classification into the other test modules.
"""
import codecs
@@ -21,7 +21,8 @@
from pandas import DataFrame
import pandas._testing as tm
-pytestmark = pytest.mark.usefixtures("pyarrow_skip")
+xfail_pyarrow = pytest.mark.usefixtures("pyarrow_xfail")
+skip_pyarrow = pytest.mark.usefixtures("pyarrow_skip")
def test_empty_decimal_marker(all_parsers):
@@ -33,10 +34,17 @@ def test_empty_decimal_marker(all_parsers):
msg = "Only length-1 decimal markers supported"
parser = all_parsers
+ if parser.engine == "pyarrow":
+ msg = (
+ "only single character unicode strings can be "
+ "converted to Py_UCS4, got length 0"
+ )
+
with pytest.raises(ValueError, match=msg):
parser.read_csv(StringIO(data), decimal="")
+@skip_pyarrow
def test_bad_stream_exception(all_parsers, csv_dir_path):
# see gh-13652
#
@@ -57,6 +65,7 @@ def test_bad_stream_exception(all_parsers, csv_dir_path):
parser.read_csv(stream)
+@skip_pyarrow
def test_malformed(all_parsers):
# see gh-6607
parser = all_parsers
@@ -71,6 +80,7 @@ def test_malformed(all_parsers):
parser.read_csv(StringIO(data), header=1, comment="#")
+@skip_pyarrow
@pytest.mark.parametrize("nrows", [5, 3, None])
def test_malformed_chunks(all_parsers, nrows):
data = """ignore
@@ -90,6 +100,7 @@ def test_malformed_chunks(all_parsers, nrows):
reader.read(nrows)
+@skip_pyarrow
def test_catch_too_many_names(all_parsers):
# see gh-5156
data = """\
@@ -109,6 +120,7 @@ def test_catch_too_many_names(all_parsers):
parser.read_csv(StringIO(data), header=0, names=["a", "b", "c", "d"])
+@skip_pyarrow
@pytest.mark.parametrize("nrows", [0, 1, 2, 3, 4, 5])
def test_raise_on_no_columns(all_parsers, nrows):
parser = all_parsers
@@ -147,6 +159,10 @@ def test_error_bad_lines(all_parsers):
data = "a\n1\n1,2,3\n4\n5,6,7"
msg = "Expected 1 fields in line 3, saw 3"
+
+ if parser.engine == "pyarrow":
+ msg = "CSV parse error: Expected 1 columns, got 3: 1,2,3"
+
with pytest.raises(ParserError, match=msg):
parser.read_csv(StringIO(data), on_bad_lines="error")
@@ -156,9 +172,13 @@ def test_warn_bad_lines(all_parsers):
parser = all_parsers
data = "a\n1\n1,2,3\n4\n5,6,7"
expected = DataFrame({"a": [1, 4]})
+ match_msg = "Skipping line"
+
+ if parser.engine == "pyarrow":
+ match_msg = "Expected 1 columns, but found 3: 1,2,3"
with tm.assert_produces_warning(
- ParserWarning, match="Skipping line", check_stacklevel=False
+ ParserWarning, match=match_msg, check_stacklevel=False
):
result = parser.read_csv(StringIO(data), on_bad_lines="warn")
tm.assert_frame_equal(result, expected)
@@ -174,10 +194,14 @@ def test_read_csv_wrong_num_columns(all_parsers):
parser = all_parsers
msg = "Expected 6 fields in line 3, saw 7"
+ if parser.engine == "pyarrow":
+ msg = "Expected 6 columns, got 7: 6,7,8,9,10,11,12"
+
with pytest.raises(ParserError, match=msg):
parser.read_csv(StringIO(data))
+@skip_pyarrow
def test_null_byte_char(request, all_parsers):
# see gh-2741
data = "\x00,foo"
@@ -200,6 +224,7 @@ def test_null_byte_char(request, all_parsers):
parser.read_csv(StringIO(data), names=names)
+@skip_pyarrow
@pytest.mark.filterwarnings("always::ResourceWarning")
def test_open_file(request, all_parsers):
# GH 39024
@@ -238,6 +263,8 @@ def test_bad_header_uniform_error(all_parsers):
"Could not construct index. Requested to use 1 "
"number of columns, but 3 left to parse."
)
+ elif parser.engine == "pyarrow":
+ msg = "CSV parse error: Expected 1 columns, got 4: col1,col2,col3,col4"
with pytest.raises(ParserError, match=msg):
parser.read_csv(StringIO(data), index_col=0, on_bad_lines="error")
@@ -253,9 +280,13 @@ def test_on_bad_lines_warn_correct_formatting(all_parsers):
a,b
"""
expected = DataFrame({"1": "a", "2": ["b"] * 2})
+ match_msg = "Skipping line"
+
+ if parser.engine == "pyarrow":
+ match_msg = "Expected 2 columns, but found 3: a,b,c"
with tm.assert_produces_warning(
- ParserWarning, match="Skipping line", check_stacklevel=False
+ ParserWarning, match=match_msg, check_stacklevel=False
):
result = parser.read_csv(StringIO(data), on_bad_lines="warn")
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/io/parser/test_unsupported.py b/pandas/tests/io/parser/test_unsupported.py
index ac171568187cd..b489c09e917af 100644
--- a/pandas/tests/io/parser/test_unsupported.py
+++ b/pandas/tests/io/parser/test_unsupported.py
@@ -151,13 +151,17 @@ def test_pyarrow_engine(self):
with pytest.raises(ValueError, match=msg):
read_csv(StringIO(data), engine="pyarrow", **kwargs)
- def test_on_bad_lines_callable_python_only(self, all_parsers):
+ def test_on_bad_lines_callable_python_or_pyarrow(self, all_parsers):
# GH 5686
+ # GH 54643
sio = StringIO("a,b\n1,2")
bad_lines_func = lambda x: x
parser = all_parsers
- if all_parsers.engine != "python":
- msg = "on_bad_line can only be a callable function if engine='python'"
+ if all_parsers.engine not in ["python", "pyarrow"]:
+ msg = (
+ "on_bad_line can only be a callable "
+ "function if engine='python' or 'pyarrow'"
+ )
with pytest.raises(ValueError, match=msg):
parser.read_csv(sio, on_bad_lines=bad_lines_func)
else:
| - [x] closes #54480
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Update documentation
This adds the `on_bad_lines` argument to the `pyarrow` engine of the `read_csv` parser, closely following the behaviour of the `python` engine. Internally it uses [pyarrow's invalid_row_handler](https://arrow.apache.org/docs/python/generated/pyarrow.csv.ParseOptions.html#pyarrow.csv.ParseOptions.invalid_row_handler). The built-in callable implementation differs slightly for pyarrow, so the difference is documented by pointing to pyarrow's documentation.
Usage Example:
example.csv:
```
a,b,c
acol1,bcol1,ccol1
acol2,ccol2
```
example.py
```
df_arrow = pd.read_csv(r"example.csv", engine='pyarrow', dtype_backend='pyarrow', on_bad_lines='warn')
print(df_arrow)
```
Console output
```
ParserWarning: Expected 3 columns, but found 2: acol2,ccol2
a b c
0 acol1 bcol1 ccol1
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/54643 | 2023-08-19T18:39:40Z | 2023-09-22T23:46:32Z | 2023-09-22T23:46:32Z | 2023-09-22T23:46:39Z |
improved explanation of linear interpolation in quantile | diff --git a/pandas/core/series.py b/pandas/core/series.py
index 079366a942f8e..07a28d312781a 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -2683,8 +2683,10 @@ def quantile(
This optional parameter specifies the interpolation method to use,
when the desired quantile lies between two data points `i` and `j`:
- * linear: `i + (j - i) * fraction`, where `fraction` is the
- fractional part of the index surrounded by `i` and `j`.
+ * linear: `i + (j - i) * fraction`, where `fraction` is the proportion
+ of the distance between `i` and `j`. it refers to the relative position
+ of the desired quantile value between i and j
+ hence fraction = (desired_quantile - i) / (j - i)
* lower: `i`.
* higher: `j`.
* nearest: `i` or `j` whichever is nearest.
| - [ ] closes #51745 | https://api.github.com/repos/pandas-dev/pandas/pulls/54642 | 2023-08-19T17:30:43Z | 2023-09-18T17:10:57Z | null | 2023-09-18T17:10:57Z |
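The `linear` rule described in the docstring change can be checked with a small pure-Python sketch (a hypothetical `quantile_linear` helper, not the pandas implementation): the target position `q * (n - 1)` splits into an integer index and a `fraction`, and the result is `i + (j - i) * fraction` for the neighbouring data points `i` and `j`.

```python
def quantile_linear(sorted_vals, q):
    """Quantile q of already-sorted data using 'linear' interpolation.

    The target position q * (n - 1) is split into an integer index and a
    fractional part; the result is i + (j - i) * fraction, where i and j
    are the neighbouring data points.
    """
    pos = q * (len(sorted_vals) - 1)
    lo = int(pos)
    fraction = pos - lo
    if lo + 1 == len(sorted_vals):  # q == 1.0: no upper neighbour
        return float(sorted_vals[lo])
    i, j = sorted_vals[lo], sorted_vals[lo + 1]
    return i + (j - i) * fraction
```

So the median of `[1, 2, 3, 4]` lands at position 1.5, giving `2 + (3 - 2) * 0.5 = 2.5`.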
BUG: getitem indexing wrong axis | diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index ca0b1705b5c38..a2871f364f092 100644
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -985,8 +985,9 @@ def _getitem_tuple_same_dim(self, tup: tuple):
"""
retval = self.obj
# Selecting columns before rows is signficiantly faster
+ start_val = (self.ndim - len(tup)) + 1
for i, key in enumerate(reversed(tup)):
- i = self.ndim - i - 1
+ i = self.ndim - i - start_val
if com.is_null_slice(key):
continue
diff --git a/pandas/tests/frame/indexing/test_getitem.py b/pandas/tests/frame/indexing/test_getitem.py
index 9fed2116b2896..9d9324f557c8d 100644
--- a/pandas/tests/frame/indexing/test_getitem.py
+++ b/pandas/tests/frame/indexing/test_getitem.py
@@ -458,6 +458,14 @@ def test_getitem_datetime_slice(self):
):
df["2011-01-01":"2011-11-01"]
+ def test_getitem_slice_same_dim_only_one_axis(self):
+ # GH#54622
+ df = DataFrame(np.random.default_rng(2).standard_normal((10, 8)))
+ result = df.iloc[(slice(None, None, 2),)]
+ assert result.shape == (5, 8)
+ expected = df.iloc[slice(None, None, 2), slice(None)]
+ tm.assert_frame_equal(result, expected)
+
class TestGetitemDeprecatedIndexers:
@pytest.mark.parametrize("key", [{"a", "b"}, {"a": "a"}])
| - [x] closes #54622 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54641 | 2023-08-19T14:54:10Z | 2023-08-21T18:44:23Z | 2023-08-21T18:44:23Z | 2023-08-21T18:45:23Z |
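The axis arithmetic in this fix can be illustrated standalone: when the indexing tuple is shorter than `ndim`, its keys correspond to the *leading* axes, so the reversed-iteration offset must include `ndim - len(tup)`. The sketch below (a hypothetical helper mirroring the patched expression, not pandas internals) returns the axis each key targets.

```python
def axes_for_reversed_keys(ndim, tup):
    """Axis hit by each key when iterating an indexing tuple in reverse.

    A tuple shorter than ndim indexes the leading axes, so the offset
    must include ndim - len(tup); the buggy version used a fixed offset
    of 1, which implicitly assumed len(tup) == ndim.
    """
    start_val = (ndim - len(tup)) + 1
    return [ndim - i - start_val for i, _ in enumerate(reversed(tup))]
```

With a one-element tuple on a 2-D frame the single key now maps to axis 0 (rows), matching the new test above.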
Rewording a sentence in tech docs | diff --git a/doc/source/user_guide/dsintro.rst b/doc/source/user_guide/dsintro.rst
index d60532f5f4027..d1e981ee1bbdc 100644
--- a/doc/source/user_guide/dsintro.rst
+++ b/doc/source/user_guide/dsintro.rst
@@ -308,7 +308,7 @@ The row and column labels can be accessed respectively by accessing the
From dict of ndarrays / lists
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-The ndarrays must all be the same length. If an index is passed, it must
+All ndarrays must share the same length. If an index is passed, it must
also be the same length as the arrays. If no index is passed, the
result will be ``range(n)``, where ``n`` is the array length.
 | I reworded a sentence grammatically so it sounds better.
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54640 | 2023-08-19T13:25:36Z | 2023-08-21T19:16:26Z | 2023-08-21T19:16:26Z | 2023-08-21T19:16:33Z |
Updated `ruff pre-commit` version and modified few lines according to it. | diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 1bda47e0631a0..9f9bcd78c07b0 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -24,7 +24,7 @@ repos:
hooks:
- id: black
- repo: https://github.com/astral-sh/ruff-pre-commit
- rev: v0.0.284
+ rev: v0.0.285
hooks:
- id: ruff
args: [--exit-non-zero-on-fix]
diff --git a/pandas/core/common.py b/pandas/core/common.py
index 1679e01ff9fe1..8fd8b10c6fc32 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -138,7 +138,7 @@ def is_bool_indexer(key: Any) -> bool:
elif isinstance(key, list):
# check if np.array(key).dtype would be bool
if len(key) > 0:
- if type(key) is not list:
+ if type(key) is not list: # noqa: E721
# GH#42461 cython will raise TypeError if we pass a subclass
key = list(key)
return lib.is_bool_list(key)
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index ee36a3515c4b3..721f4f5e1c494 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -7528,7 +7528,7 @@ def ensure_index(index_like: Axes, copy: bool = False) -> Index:
index_like = list(index_like)
if isinstance(index_like, list):
- if type(index_like) is not list:
+ if type(index_like) is not list: # noqa: E721
# must check for exactly list here because of strict type
# check in clean_index_list
index_like = list(index_like)
diff --git a/pandas/core/internals/construction.py b/pandas/core/internals/construction.py
index 2290cd86f35e6..82b25955d0def 100644
--- a/pandas/core/internals/construction.py
+++ b/pandas/core/internals/construction.py
@@ -903,7 +903,7 @@ def _list_of_dict_to_arrays(
# assure that they are of the base dict class and not of derived
# classes
- data = [d if type(d) is dict else dict(d) for d in data]
+ data = [d if type(d) is dict else dict(d) for d in data] # noqa: E721
content = lib.dicts_to_array(data, list(columns))
return content, columns
| #### Ruff 0.0.285 has been released (https://github.com/astral-sh/ruff-pre-commit).
So I have updated the ruff version in the `.pre-commit-config.yaml` file, ran `ruff .`, and corrected a few files according to the errors it reported.
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54639 | 2023-08-19T10:54:09Z | 2023-08-21T18:37:01Z | 2023-08-21T18:37:01Z | 2023-08-21T18:37:08Z |
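The `noqa: E721` markers above exist because E721 normally flags exact-type comparisons in favour of `isinstance()`; here the identity check is intentional, since list *subclasses* must be coerced to plain `list`. A minimal sketch of that pattern (a hypothetical `ensure_plain_list` helper, not a pandas function):

```python
def ensure_plain_list(obj):
    """Coerce list subclasses to plain list, leaving exact lists alone.

    ``type(obj) is not list`` is an intentional identity check (the kind
    E721 flags): isinstance() is also True for subclasses, but subclasses
    specifically need converting here.
    """
    if isinstance(obj, list) and type(obj) is not list:  # noqa: E721
        return list(obj)
    return obj
```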
Backport PR #54633 on branch 2.1.x (DEP: remove python-snappy and brotli as optional dependencies (no longer used)) | diff --git a/ci/deps/actions-310.yaml b/ci/deps/actions-310.yaml
index ffa7732c604a0..638aaedecf2b5 100644
--- a/ci/deps/actions-310.yaml
+++ b/ci/deps/actions-310.yaml
@@ -27,7 +27,6 @@ dependencies:
- beautifulsoup4>=4.11.1
- blosc>=1.21.0
- bottleneck>=1.3.4
- - brotlipy>=0.7.0
- fastparquet>=0.8.1
- fsspec>=2022.05.0
- html5lib>=1.1
@@ -47,7 +46,6 @@ dependencies:
- pymysql>=1.0.2
- pyreadstat>=1.1.5
- pytables>=3.7.0
- - python-snappy>=0.6.1
- pyxlsb>=1.0.9
- s3fs>=2022.05.0
- scipy>=1.8.1
diff --git a/ci/deps/actions-311-downstream_compat.yaml b/ci/deps/actions-311-downstream_compat.yaml
index 5a6a26c2e1ad8..4d925786c22d6 100644
--- a/ci/deps/actions-311-downstream_compat.yaml
+++ b/ci/deps/actions-311-downstream_compat.yaml
@@ -28,7 +28,6 @@ dependencies:
- beautifulsoup4>=4.11.1
- blosc>=1.21.0
- bottleneck>=1.3.4
- - brotlipy>=0.7.0
- fastparquet>=0.8.1
- fsspec>=2022.05.0
- html5lib>=1.1
@@ -48,7 +47,6 @@ dependencies:
- pymysql>=1.0.2
- pyreadstat>=1.1.5
- pytables>=3.7.0
- - python-snappy>=0.6.1
- pyxlsb>=1.0.9
- s3fs>=2022.05.0
- scipy>=1.8.1
diff --git a/ci/deps/actions-311.yaml b/ci/deps/actions-311.yaml
index 9d60d734db5b3..698107341c26d 100644
--- a/ci/deps/actions-311.yaml
+++ b/ci/deps/actions-311.yaml
@@ -27,7 +27,6 @@ dependencies:
- beautifulsoup4>=4.11.1
- blosc>=1.21.0
- bottleneck>=1.3.4
- - brotlipy>=0.7.0
- fastparquet>=0.8.1
- fsspec>=2022.05.0
- html5lib>=1.1
@@ -47,7 +46,6 @@ dependencies:
- pymysql>=1.0.2
- pyreadstat>=1.1.5
# - pytables>=3.7.0, 3.8.0 is first version that supports 3.11
- - python-snappy>=0.6.1
- pyxlsb>=1.0.9
- s3fs>=2022.05.0
- scipy>=1.8.1
diff --git a/ci/deps/actions-39-minimum_versions.yaml b/ci/deps/actions-39-minimum_versions.yaml
index 0e2fcf87c2d6e..a0aafb1d772a7 100644
--- a/ci/deps/actions-39-minimum_versions.yaml
+++ b/ci/deps/actions-39-minimum_versions.yaml
@@ -29,7 +29,6 @@ dependencies:
- beautifulsoup4=4.11.1
- blosc=1.21.0
- bottleneck=1.3.4
- - brotlipy=0.7.0
- fastparquet=0.8.1
- fsspec=2022.05.0
- html5lib=1.1
@@ -49,7 +48,6 @@ dependencies:
- pymysql=1.0.2
- pyreadstat=1.1.5
- pytables=3.7.0
- - python-snappy=0.6.1
- pyxlsb=1.0.9
- s3fs=2022.05.0
- scipy=1.8.1
diff --git a/ci/deps/actions-39.yaml b/ci/deps/actions-39.yaml
index 6ea0d41b947dc..6c6b1de165496 100644
--- a/ci/deps/actions-39.yaml
+++ b/ci/deps/actions-39.yaml
@@ -27,7 +27,6 @@ dependencies:
- beautifulsoup4>=4.11.1
- blosc>=1.21.0
- bottleneck>=1.3.4
- - brotlipy>=0.7.0
- fastparquet>=0.8.1
- fsspec>=2022.05.0
- html5lib>=1.1
@@ -47,7 +46,6 @@ dependencies:
- pymysql>=1.0.2
- pyreadstat>=1.1.5
- pytables>=3.7.0
- - python-snappy>=0.6.1
- pyxlsb>=1.0.9
- s3fs>=2022.05.0
- scipy>=1.8.1
diff --git a/ci/deps/circle-310-arm64.yaml b/ci/deps/circle-310-arm64.yaml
index df4e8e285bd02..d436e90fa6186 100644
--- a/ci/deps/circle-310-arm64.yaml
+++ b/ci/deps/circle-310-arm64.yaml
@@ -27,7 +27,6 @@ dependencies:
- beautifulsoup4>=4.11.1
- blosc>=1.21.0
- bottleneck>=1.3.4
- - brotlipy>=0.7.0
- fastparquet>=0.8.1
- fsspec>=2022.05.0
- html5lib>=1.1
@@ -48,7 +47,6 @@ dependencies:
- pymysql>=1.0.2
# - pyreadstat>=1.1.5 not available on ARM
- pytables>=3.7.0
- - python-snappy>=0.6.1
- pyxlsb>=1.0.9
- s3fs>=2022.05.0
- scipy>=1.8.1
diff --git a/doc/source/getting_started/install.rst b/doc/source/getting_started/install.rst
index 0ab0391ac78a9..ae7c9d4ea9c62 100644
--- a/doc/source/getting_started/install.rst
+++ b/doc/source/getting_started/install.rst
@@ -412,8 +412,6 @@ Installable with ``pip install "pandas[compression]"``
========================= ================== =============== =============================================================
Dependency Minimum Version pip extra Notes
========================= ================== =============== =============================================================
-brotli 0.7.0 compression Brotli compression
-python-snappy 0.6.1 compression Snappy compression
Zstandard 0.17.0 compression Zstandard compression
========================= ================== =============== =============================================================
diff --git a/environment.yml b/environment.yml
index 3a0da0bfc703d..1a9dffb55bca7 100644
--- a/environment.yml
+++ b/environment.yml
@@ -27,7 +27,6 @@ dependencies:
# optional dependencies
- beautifulsoup4>=4.11.1
- blosc
- - brotlipy>=0.7.0
- bottleneck>=1.3.4
- fastparquet>=0.8.1
- fsspec>=2022.05.0
@@ -48,7 +47,6 @@ dependencies:
- pymysql>=1.0.2
- pyreadstat>=1.1.5
- pytables>=3.7.0
- - python-snappy>=0.6.1
- pyxlsb>=1.0.9
- s3fs>=2022.05.0
- scipy>=1.8.1
diff --git a/pandas/compat/_optional.py b/pandas/compat/_optional.py
index fe4e6457ff08c..c5792fa1379fe 100644
--- a/pandas/compat/_optional.py
+++ b/pandas/compat/_optional.py
@@ -18,7 +18,6 @@
"bs4": "4.11.1",
"blosc": "1.21.0",
"bottleneck": "1.3.4",
- "brotli": "0.7.0",
"dataframe-api-compat": "0.1.7",
"fastparquet": "0.8.1",
"fsspec": "2022.05.0",
@@ -41,7 +40,6 @@
"pyxlsb": "1.0.9",
"s3fs": "2022.05.0",
"scipy": "1.8.1",
- "snappy": "0.6.1",
"sqlalchemy": "1.4.36",
"tables": "3.7.0",
"tabulate": "0.8.10",
@@ -60,12 +58,10 @@
INSTALL_MAPPING = {
"bs4": "beautifulsoup4",
"bottleneck": "Bottleneck",
- "brotli": "brotlipy",
"jinja2": "Jinja2",
"lxml.etree": "lxml",
"odf": "odfpy",
"pandas_gbq": "pandas-gbq",
- "snappy": "python-snappy",
"sqlalchemy": "SQLAlchemy",
"tables": "pytables",
}
@@ -75,13 +71,6 @@ def get_version(module: types.ModuleType) -> str:
version = getattr(module, "__version__", None)
if version is None:
- if module.__name__ == "brotli":
- # brotli doesn't contain attributes to confirm it's version
- return ""
- if module.__name__ == "snappy":
- # snappy doesn't contain attributes to confirm it's version
- # See https://github.com/andrix/python-snappy/pull/119
- return ""
raise ImportError(f"Can't determine version for {module.__name__}")
if module.__name__ == "psycopg2":
# psycopg2 appends " (dt dec pq3 ext lo64)" to it's version
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 9939daadd9237..56e1b82c992c3 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -2909,10 +2909,7 @@ def to_parquet(
'pyarrow' is unavailable.
compression : str or None, default 'snappy'
Name of the compression to use. Use ``None`` for no compression.
- The supported compression methods actually depend on which engine
- is used. For 'pyarrow', 'snappy', 'gzip', 'brotli', 'lz4', 'zstd'
- are all supported. For 'fastparquet', only 'gzip' and 'snappy' are
- supported.
+ Supported options: 'snappy', 'gzip', 'brotli', 'lz4', 'zstd'.
index : bool, default None
If ``True``, include the dataframe's index(es) in the file output.
If ``False``, they will not be written to the file.
diff --git a/pandas/io/parquet.py b/pandas/io/parquet.py
index 91987e6531261..f51b98a929440 100644
--- a/pandas/io/parquet.py
+++ b/pandas/io/parquet.py
@@ -444,10 +444,7 @@ def to_parquet(
if you wish to use its implementation.
compression : {{'snappy', 'gzip', 'brotli', 'lz4', 'zstd', None}},
default 'snappy'. Name of the compression to use. Use ``None``
- for no compression. The supported compression methods actually
- depend on which engine is used. For 'pyarrow', 'snappy', 'gzip',
- 'brotli', 'lz4', 'zstd' are all supported. For 'fastparquet',
- only 'gzip' and 'snappy' are supported.
+ for no compression.
index : bool, default None
If ``True``, include the dataframe's index(es) in the file output. If
``False``, they will not be written to the file.
diff --git a/pandas/tests/io/test_parquet.py b/pandas/tests/io/test_parquet.py
index fcc1c218a149d..a4c6cfcf9fe0e 100644
--- a/pandas/tests/io/test_parquet.py
+++ b/pandas/tests/io/test_parquet.py
@@ -404,12 +404,6 @@ def test_columns_dtypes(self, engine):
@pytest.mark.parametrize("compression", [None, "gzip", "snappy", "brotli"])
def test_compression(self, engine, compression):
- if compression == "snappy":
- pytest.importorskip("snappy")
-
- elif compression == "brotli":
- pytest.importorskip("brotli")
-
df = pd.DataFrame({"A": [1, 2, 3]})
check_round_trip(df, engine, write_kwargs={"compression": compression})
diff --git a/pyproject.toml b/pyproject.toml
index 1034196baa15e..d2fb6a8d854d7 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -81,13 +81,12 @@ xml = ['lxml>=4.8.0']
plot = ['matplotlib>=3.6.1']
output_formatting = ['jinja2>=3.1.2', 'tabulate>=0.8.10']
clipboard = ['PyQt5>=5.15.6', 'qtpy>=2.2.0']
-compression = ['brotlipy>=0.7.0', 'python-snappy>=0.6.1', 'zstandard>=0.17.0']
+compression = ['zstandard>=0.17.0']
consortium-standard = ['dataframe-api-compat>=0.1.7']
all = ['beautifulsoup4>=4.11.1',
# blosc only available on conda (https://github.com/Blosc/python-blosc/issues/297)
#'blosc>=1.21.0',
'bottleneck>=1.3.4',
- 'brotlipy>=0.7.0',
'dataframe-api-compat>=0.1.7',
'fastparquet>=0.8.1',
'fsspec>=2022.05.0',
@@ -110,7 +109,6 @@ all = ['beautifulsoup4>=4.11.1',
'pytest>=7.3.2',
'pytest-xdist>=2.2.0',
'pytest-asyncio>=0.17.0',
- 'python-snappy>=0.6.1',
'pyxlsb>=1.0.9',
'qtpy>=2.2.0',
'scipy>=1.8.1',
diff --git a/requirements-dev.txt b/requirements-dev.txt
index 0944acbc36c9b..be02007a36333 100644
--- a/requirements-dev.txt
+++ b/requirements-dev.txt
@@ -16,7 +16,6 @@ numpy
pytz
beautifulsoup4>=4.11.1
blosc
-brotlipy>=0.7.0
bottleneck>=1.3.4
fastparquet>=0.8.1
fsspec>=2022.05.0
@@ -37,7 +36,6 @@ pyarrow>=7.0.0
pymysql>=1.0.2
pyreadstat>=1.1.5
tables>=3.7.0
-python-snappy>=0.6.1
pyxlsb>=1.0.9
s3fs>=2022.05.0
scipy>=1.8.1
diff --git a/scripts/tests/data/deps_expected_random.yaml b/scripts/tests/data/deps_expected_random.yaml
index 35d7fe74806a9..c70025f8f019d 100644
--- a/scripts/tests/data/deps_expected_random.yaml
+++ b/scripts/tests/data/deps_expected_random.yaml
@@ -26,7 +26,6 @@ dependencies:
- beautifulsoup4>=5.9.3
- blosc
- bottleneck>=1.3.2
- - brotlipy>=0.7.0
- fastparquet>=0.6.3
- fsspec>=2021.07.0
- html5lib>=1.1
@@ -45,7 +44,6 @@ dependencies:
- pymysql>=1.0.2
- pyreadstat>=1.1.2
- pytables>=3.6.1
- - python-snappy>=0.6.0
- pyxlsb>=1.0.8
- s3fs>=2021.08.0
- scipy>=1.7.1
diff --git a/scripts/tests/data/deps_minimum.toml b/scripts/tests/data/deps_minimum.toml
index 6f56ca498794b..b43815a982139 100644
--- a/scripts/tests/data/deps_minimum.toml
+++ b/scripts/tests/data/deps_minimum.toml
@@ -77,12 +77,11 @@ xml = ['lxml>=4.6.3']
plot = ['matplotlib>=3.6.1']
output_formatting = ['jinja2>=3.0.0', 'tabulate>=0.8.9']
clipboard = ['PyQt5>=5.15.1', 'qtpy>=2.2.0']
-compression = ['brotlipy>=0.7.0', 'python-snappy>=0.6.0', 'zstandard>=0.15.2']
+compression = ['zstandard>=0.15.2']
all = ['beautifulsoup4>=5.9.3',
# blosc only available on conda (https://github.com/Blosc/python-blosc/issues/297)
#'blosc>=1.21.0',
'bottleneck>=1.3.2',
- 'brotlipy>=0.7.0',
'fastparquet>=0.6.3',
'fsspec>=2021.07.0',
'gcsfs>=2021.07.0',
@@ -104,7 +103,6 @@ all = ['beautifulsoup4>=5.9.3',
'pytest>=7.3.2',
'pytest-xdist>=2.2.0',
'pytest-asyncio>=0.17.0',
- 'python-snappy>=0.6.0',
'pyxlsb>=1.0.8',
'qtpy>=2.2.0',
'scipy>=1.7.1',
diff --git a/scripts/tests/data/deps_unmodified_random.yaml b/scripts/tests/data/deps_unmodified_random.yaml
index 405762d33f53e..503eb3c7c7734 100644
--- a/scripts/tests/data/deps_unmodified_random.yaml
+++ b/scripts/tests/data/deps_unmodified_random.yaml
@@ -26,7 +26,6 @@ dependencies:
- beautifulsoup4
- blosc
- bottleneck>=1.3.2
- - brotlipy
- fastparquet>=0.6.3
- fsspec>=2021.07.0
- html5lib>=1.1
@@ -45,7 +44,6 @@ dependencies:
- pymysql>=1.0.2
- pyreadstat>=1.1.2
- pytables>=3.6.1
- - python-snappy>=0.6.0
- pyxlsb>=1.0.8
- s3fs>=2021.08.0
- scipy>=1.7.1
| Backport PR #54633: DEP: remove python-snappy and brotli as optional dependencies (no longer used) | https://api.github.com/repos/pandas-dev/pandas/pulls/54637 | 2023-08-19T07:46:38Z | 2023-08-21T18:26:27Z | 2023-08-21T18:26:27Z | 2023-08-21T18:26:28Z |
BUG [fix] prevent potential memory leak | diff --git a/pandas/tests/util/test_find_stack_level.py b/pandas/tests/util/test_find_stack_level.py
new file mode 100644
index 0000000000000..a5e3dde3ddfae
--- /dev/null
+++ b/pandas/tests/util/test_find_stack_level.py
@@ -0,0 +1,14 @@
+from pytest import mark
+
+from pandas.util._exceptions import find_stack_level
+
+
+@mark.parametrize("expected", [1])
+def test_find_stack_level(expected):
+ """
+ Note that this test would not be expected to pass on CPython implementations which
+ don't support getting the frame with currentframe (which would always return None).
+ """
+
+ top_lvl = find_stack_level()
+ assert top_lvl == expected, f"Expected stack level {expected} but got {top_lvl}"
diff --git a/pandas/util/_exceptions.py b/pandas/util/_exceptions.py
index 573f76a63459b..cfe937e45ba02 100644
--- a/pandas/util/_exceptions.py
+++ b/pandas/util/_exceptions.py
@@ -9,6 +9,7 @@
if TYPE_CHECKING:
from collections.abc import Generator
+ from types import FrameType
@contextlib.contextmanager
@@ -42,15 +43,18 @@ def find_stack_level() -> int:
test_dir = os.path.join(pkg_dir, "tests")
# https://stackoverflow.com/questions/17407119/python-inspect-stack-is-slow
- frame = inspect.currentframe()
- n = 0
- while frame:
- fname = inspect.getfile(frame)
- if fname.startswith(pkg_dir) and not fname.startswith(test_dir):
- frame = frame.f_back
- n += 1
- else:
- break
+ frame: FrameType | None = inspect.currentframe()
+ try:
+ n = 0
+ while frame:
+ filename = inspect.getfile(frame)
+ if filename.startswith(pkg_dir) and not filename.startswith(test_dir):
+ frame = frame.f_back
+ n += 1
+ else:
+ break
+ finally:
+ del frame # Prevent potential memory leak
return n
| - [x] closes #54628
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54636 | 2023-08-18T21:06:52Z | 2023-11-07T00:54:32Z | null | 2023-11-07T00:54:32Z |
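The `try/finally: del frame` pattern in this fix matters because a frame object references its local namespace, which in turn references the `frame` name itself, forming a reference cycle that delays collection. A minimal illustration of the same pattern (a hypothetical `caller_filename` helper, not pandas code):

```python
import inspect


def caller_filename():
    """Filename of the calling frame, releasing the frame reference.

    A frame references its locals, and the local name ``frame`` references
    the frame -- a cycle. Deleting the name in ``finally`` breaks the
    cycle so the frame can be reclaimed promptly. Note currentframe()
    may return None on non-CPython implementations.
    """
    frame = inspect.currentframe()
    try:
        if frame is None or frame.f_back is None:
            return None
        return inspect.getfile(frame.f_back)
    finally:
        del frame  # break the frame <-> locals reference cycle
```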
DOC: add missing parameters n/normalize/offset to offsets classes | diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx
index 958fe1181d309..d17ec6031f978 100644
--- a/pandas/_libs/tslibs/offsets.pyx
+++ b/pandas/_libs/tslibs/offsets.pyx
@@ -1621,6 +1621,8 @@ cdef class BusinessDay(BusinessMixin):
The number of days represented.
normalize : bool, default False
Normalize start/end dates to midnight.
+ offset : timedelta, default timedelta(0)
+ Time offset to apply.
Examples
--------
@@ -3148,6 +3150,10 @@ cdef class Week(SingleConstructorOffset):
Parameters
----------
+ n : int, default 1
+ The number of weeks represented.
+ normalize : bool, default False
+ Normalize start/end dates to midnight before generating date range.
weekday : int or None, default None
Always generate specific day of week.
0 for Monday and 6 for Sunday.
@@ -3398,6 +3404,9 @@ cdef class LastWeekOfMonth(WeekOfMonthMixin):
Parameters
----------
n : int, default 1
+ The number of months represented.
+ normalize : bool, default False
+ Normalize start/end dates to midnight before generating date range.
weekday : int {0, 1, ..., 6}, default 0
A specific integer for the day of the week.
@@ -4150,6 +4159,8 @@ cdef class CustomBusinessHour(BusinessHour):
Start time of your custom business hour in 24h format.
end : str, time, or list of str/time, default: "17:00"
End time of your custom business hour in 24h format.
+ offset : timedelta, default timedelta(0)
+ Time offset to apply.
Examples
--------
| xref #52431
Added the missing parameters `n` / `normalize` / `offset` to `Week`, `LastWeekOfMonth`, `BusinessDay`, and `CustomBusinessHour`.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54635 | 2023-08-18T20:12:15Z | 2023-08-21T19:18:08Z | 2023-08-21T19:18:08Z | 2023-08-21T19:18:14Z |
DEPR: make arguments keyword only in to_clipboard | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index d8b63a6d1395d..7384c512f180b 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -92,6 +92,7 @@ Other API changes
Deprecations
~~~~~~~~~~~~
+- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_clipboard`. (:issue:`54229`)
- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_csv` except ``path_or_buf``. (:issue:`54229`)
- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_hdf` except ``path_or_buf``. (:issue:`54229`)
- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_html` except ``buf``. (:issue:`54229`)
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index cf60717011222..cc68aa3db6908 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -3098,6 +3098,9 @@ def to_pickle(
)
@final
+ @deprecate_nonkeyword_arguments(
+ version="3.0", allowed_args=["self"], name="to_clipboard"
+ )
def to_clipboard(
self, excel: bool_t = True, sep: str | None = None, **kwargs
) -> None:
diff --git a/pandas/tests/io/test_clipboard.py b/pandas/tests/io/test_clipboard.py
index 4b3c82ad3f083..10e0467c5d74d 100644
--- a/pandas/tests/io/test_clipboard.py
+++ b/pandas/tests/io/test_clipboard.py
@@ -471,3 +471,13 @@ def test_invalid_dtype_backend(self):
)
with pytest.raises(ValueError, match=msg):
read_clipboard(dtype_backend="numpy")
+
+ def test_to_clipboard_pos_args_deprecation(self):
+ # GH-54229
+ df = DataFrame({"a": [1, 2, 3]})
+ msg = (
+ r"Starting with pandas version 3.0 all arguments of to_clipboard "
+ r"will be keyword-only."
+ )
+ with tm.assert_produces_warning(FutureWarning, match=msg):
+ df.to_clipboard(True, None)
| - [x] xref #54229
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added an entry in the latest `doc/source/whatsnew/v2.2.0.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54634 | 2023-08-18T20:03:26Z | 2023-08-22T18:57:21Z | 2023-08-22T18:57:21Z | 2023-08-22T20:01:38Z |
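The mechanism behind `deprecate_nonkeyword_arguments` can be sketched as a simplified stand-in (not the pandas implementation, which also rewrites the signature): count the positional arguments against the allowed names and emit a `FutureWarning` when there are too many.

```python
import functools
import warnings


def deprecate_nonkeyword_arguments(allowed_args):
    """Warn when more positional arguments than ``allowed_args`` are passed.

    Simplified stand-in for the pandas decorator: it only compares the
    number of positional arguments against the allowed names.
    """
    def decorate(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if len(args) > len(allowed_args):
                warnings.warn(
                    f"Starting with version 3.0 all arguments of "
                    f"{func.__name__} will be keyword-only.",
                    FutureWarning,
                    stacklevel=2,
                )
            return func(*args, **kwargs)
        return wrapper
    return decorate
```

Calling a decorated method with extra positional arguments then warns, while keyword usage stays silent, matching the test added in the diff.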
DEP: remove python-snappy and brotli as optional dependencies (no longer used) | diff --git a/ci/deps/actions-310.yaml b/ci/deps/actions-310.yaml
index ffa7732c604a0..638aaedecf2b5 100644
--- a/ci/deps/actions-310.yaml
+++ b/ci/deps/actions-310.yaml
@@ -27,7 +27,6 @@ dependencies:
- beautifulsoup4>=4.11.1
- blosc>=1.21.0
- bottleneck>=1.3.4
- - brotlipy>=0.7.0
- fastparquet>=0.8.1
- fsspec>=2022.05.0
- html5lib>=1.1
@@ -47,7 +46,6 @@ dependencies:
- pymysql>=1.0.2
- pyreadstat>=1.1.5
- pytables>=3.7.0
- - python-snappy>=0.6.1
- pyxlsb>=1.0.9
- s3fs>=2022.05.0
- scipy>=1.8.1
diff --git a/ci/deps/actions-311-downstream_compat.yaml b/ci/deps/actions-311-downstream_compat.yaml
index 5a6a26c2e1ad8..4d925786c22d6 100644
--- a/ci/deps/actions-311-downstream_compat.yaml
+++ b/ci/deps/actions-311-downstream_compat.yaml
@@ -28,7 +28,6 @@ dependencies:
- beautifulsoup4>=4.11.1
- blosc>=1.21.0
- bottleneck>=1.3.4
- - brotlipy>=0.7.0
- fastparquet>=0.8.1
- fsspec>=2022.05.0
- html5lib>=1.1
@@ -48,7 +47,6 @@ dependencies:
- pymysql>=1.0.2
- pyreadstat>=1.1.5
- pytables>=3.7.0
- - python-snappy>=0.6.1
- pyxlsb>=1.0.9
- s3fs>=2022.05.0
- scipy>=1.8.1
diff --git a/ci/deps/actions-311.yaml b/ci/deps/actions-311.yaml
index 9d60d734db5b3..698107341c26d 100644
--- a/ci/deps/actions-311.yaml
+++ b/ci/deps/actions-311.yaml
@@ -27,7 +27,6 @@ dependencies:
- beautifulsoup4>=4.11.1
- blosc>=1.21.0
- bottleneck>=1.3.4
- - brotlipy>=0.7.0
- fastparquet>=0.8.1
- fsspec>=2022.05.0
- html5lib>=1.1
@@ -47,7 +46,6 @@ dependencies:
- pymysql>=1.0.2
- pyreadstat>=1.1.5
# - pytables>=3.7.0, 3.8.0 is first version that supports 3.11
- - python-snappy>=0.6.1
- pyxlsb>=1.0.9
- s3fs>=2022.05.0
- scipy>=1.8.1
diff --git a/ci/deps/actions-39-minimum_versions.yaml b/ci/deps/actions-39-minimum_versions.yaml
index 0e2fcf87c2d6e..a0aafb1d772a7 100644
--- a/ci/deps/actions-39-minimum_versions.yaml
+++ b/ci/deps/actions-39-minimum_versions.yaml
@@ -29,7 +29,6 @@ dependencies:
- beautifulsoup4=4.11.1
- blosc=1.21.0
- bottleneck=1.3.4
- - brotlipy=0.7.0
- fastparquet=0.8.1
- fsspec=2022.05.0
- html5lib=1.1
@@ -49,7 +48,6 @@ dependencies:
- pymysql=1.0.2
- pyreadstat=1.1.5
- pytables=3.7.0
- - python-snappy=0.6.1
- pyxlsb=1.0.9
- s3fs=2022.05.0
- scipy=1.8.1
diff --git a/ci/deps/actions-39.yaml b/ci/deps/actions-39.yaml
index 6ea0d41b947dc..6c6b1de165496 100644
--- a/ci/deps/actions-39.yaml
+++ b/ci/deps/actions-39.yaml
@@ -27,7 +27,6 @@ dependencies:
- beautifulsoup4>=4.11.1
- blosc>=1.21.0
- bottleneck>=1.3.4
- - brotlipy>=0.7.0
- fastparquet>=0.8.1
- fsspec>=2022.05.0
- html5lib>=1.1
@@ -47,7 +46,6 @@ dependencies:
- pymysql>=1.0.2
- pyreadstat>=1.1.5
- pytables>=3.7.0
- - python-snappy>=0.6.1
- pyxlsb>=1.0.9
- s3fs>=2022.05.0
- scipy>=1.8.1
diff --git a/ci/deps/circle-310-arm64.yaml b/ci/deps/circle-310-arm64.yaml
index df4e8e285bd02..d436e90fa6186 100644
--- a/ci/deps/circle-310-arm64.yaml
+++ b/ci/deps/circle-310-arm64.yaml
@@ -27,7 +27,6 @@ dependencies:
- beautifulsoup4>=4.11.1
- blosc>=1.21.0
- bottleneck>=1.3.4
- - brotlipy>=0.7.0
- fastparquet>=0.8.1
- fsspec>=2022.05.0
- html5lib>=1.1
@@ -48,7 +47,6 @@ dependencies:
- pymysql>=1.0.2
# - pyreadstat>=1.1.5 not available on ARM
- pytables>=3.7.0
- - python-snappy>=0.6.1
- pyxlsb>=1.0.9
- s3fs>=2022.05.0
- scipy>=1.8.1
diff --git a/doc/source/getting_started/install.rst b/doc/source/getting_started/install.rst
index 0ab0391ac78a9..ae7c9d4ea9c62 100644
--- a/doc/source/getting_started/install.rst
+++ b/doc/source/getting_started/install.rst
@@ -412,8 +412,6 @@ Installable with ``pip install "pandas[compression]"``
========================= ================== =============== =============================================================
Dependency Minimum Version pip extra Notes
========================= ================== =============== =============================================================
-brotli 0.7.0 compression Brotli compression
-python-snappy 0.6.1 compression Snappy compression
Zstandard 0.17.0 compression Zstandard compression
========================= ================== =============== =============================================================
diff --git a/environment.yml b/environment.yml
index 3a0da0bfc703d..1a9dffb55bca7 100644
--- a/environment.yml
+++ b/environment.yml
@@ -27,7 +27,6 @@ dependencies:
# optional dependencies
- beautifulsoup4>=4.11.1
- blosc
- - brotlipy>=0.7.0
- bottleneck>=1.3.4
- fastparquet>=0.8.1
- fsspec>=2022.05.0
@@ -48,7 +47,6 @@ dependencies:
- pymysql>=1.0.2
- pyreadstat>=1.1.5
- pytables>=3.7.0
- - python-snappy>=0.6.1
- pyxlsb>=1.0.9
- s3fs>=2022.05.0
- scipy>=1.8.1
diff --git a/pandas/compat/_optional.py b/pandas/compat/_optional.py
index fe4e6457ff08c..c5792fa1379fe 100644
--- a/pandas/compat/_optional.py
+++ b/pandas/compat/_optional.py
@@ -18,7 +18,6 @@
"bs4": "4.11.1",
"blosc": "1.21.0",
"bottleneck": "1.3.4",
- "brotli": "0.7.0",
"dataframe-api-compat": "0.1.7",
"fastparquet": "0.8.1",
"fsspec": "2022.05.0",
@@ -41,7 +40,6 @@
"pyxlsb": "1.0.9",
"s3fs": "2022.05.0",
"scipy": "1.8.1",
- "snappy": "0.6.1",
"sqlalchemy": "1.4.36",
"tables": "3.7.0",
"tabulate": "0.8.10",
@@ -60,12 +58,10 @@
INSTALL_MAPPING = {
"bs4": "beautifulsoup4",
"bottleneck": "Bottleneck",
- "brotli": "brotlipy",
"jinja2": "Jinja2",
"lxml.etree": "lxml",
"odf": "odfpy",
"pandas_gbq": "pandas-gbq",
- "snappy": "python-snappy",
"sqlalchemy": "SQLAlchemy",
"tables": "pytables",
}
@@ -75,13 +71,6 @@ def get_version(module: types.ModuleType) -> str:
version = getattr(module, "__version__", None)
if version is None:
- if module.__name__ == "brotli":
- # brotli doesn't contain attributes to confirm it's version
- return ""
- if module.__name__ == "snappy":
- # snappy doesn't contain attributes to confirm it's version
- # See https://github.com/andrix/python-snappy/pull/119
- return ""
raise ImportError(f"Can't determine version for {module.__name__}")
if module.__name__ == "psycopg2":
# psycopg2 appends " (dt dec pq3 ext lo64)" to it's version
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index c2a3d9285386e..25fc5bd6664f5 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -2916,10 +2916,7 @@ def to_parquet(
'pyarrow' is unavailable.
compression : str or None, default 'snappy'
Name of the compression to use. Use ``None`` for no compression.
- The supported compression methods actually depend on which engine
- is used. For 'pyarrow', 'snappy', 'gzip', 'brotli', 'lz4', 'zstd'
- are all supported. For 'fastparquet', only 'gzip' and 'snappy' are
- supported.
+ Supported options: 'snappy', 'gzip', 'brotli', 'lz4', 'zstd'.
index : bool, default None
If ``True``, include the dataframe's index(es) in the file output.
If ``False``, they will not be written to the file.
diff --git a/pandas/io/parquet.py b/pandas/io/parquet.py
index 91987e6531261..f51b98a929440 100644
--- a/pandas/io/parquet.py
+++ b/pandas/io/parquet.py
@@ -444,10 +444,7 @@ def to_parquet(
if you wish to use its implementation.
compression : {{'snappy', 'gzip', 'brotli', 'lz4', 'zstd', None}},
default 'snappy'. Name of the compression to use. Use ``None``
- for no compression. The supported compression methods actually
- depend on which engine is used. For 'pyarrow', 'snappy', 'gzip',
- 'brotli', 'lz4', 'zstd' are all supported. For 'fastparquet',
- only 'gzip' and 'snappy' are supported.
+ for no compression.
index : bool, default None
If ``True``, include the dataframe's index(es) in the file output. If
``False``, they will not be written to the file.
diff --git a/pandas/tests/io/test_parquet.py b/pandas/tests/io/test_parquet.py
index fcc1c218a149d..a4c6cfcf9fe0e 100644
--- a/pandas/tests/io/test_parquet.py
+++ b/pandas/tests/io/test_parquet.py
@@ -404,12 +404,6 @@ def test_columns_dtypes(self, engine):
@pytest.mark.parametrize("compression", [None, "gzip", "snappy", "brotli"])
def test_compression(self, engine, compression):
- if compression == "snappy":
- pytest.importorskip("snappy")
-
- elif compression == "brotli":
- pytest.importorskip("brotli")
-
df = pd.DataFrame({"A": [1, 2, 3]})
check_round_trip(df, engine, write_kwargs={"compression": compression})
diff --git a/pyproject.toml b/pyproject.toml
index c28f9259c749c..9af86a6bdcd16 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -81,13 +81,12 @@ xml = ['lxml>=4.8.0']
plot = ['matplotlib>=3.6.1']
output_formatting = ['jinja2>=3.1.2', 'tabulate>=0.8.10']
clipboard = ['PyQt5>=5.15.6', 'qtpy>=2.2.0']
-compression = ['brotlipy>=0.7.0', 'python-snappy>=0.6.1', 'zstandard>=0.17.0']
+compression = ['zstandard>=0.17.0']
consortium-standard = ['dataframe-api-compat>=0.1.7']
all = ['beautifulsoup4>=4.11.1',
# blosc only available on conda (https://github.com/Blosc/python-blosc/issues/297)
#'blosc>=1.21.0',
'bottleneck>=1.3.4',
- 'brotlipy>=0.7.0',
'dataframe-api-compat>=0.1.7',
'fastparquet>=0.8.1',
'fsspec>=2022.05.0',
@@ -110,7 +109,6 @@ all = ['beautifulsoup4>=4.11.1',
'pytest>=7.3.2',
'pytest-xdist>=2.2.0',
'pytest-asyncio>=0.17.0',
- 'python-snappy>=0.6.1',
'pyxlsb>=1.0.9',
'qtpy>=2.2.0',
'scipy>=1.8.1',
diff --git a/requirements-dev.txt b/requirements-dev.txt
index 0944acbc36c9b..be02007a36333 100644
--- a/requirements-dev.txt
+++ b/requirements-dev.txt
@@ -16,7 +16,6 @@ numpy
pytz
beautifulsoup4>=4.11.1
blosc
-brotlipy>=0.7.0
bottleneck>=1.3.4
fastparquet>=0.8.1
fsspec>=2022.05.0
@@ -37,7 +36,6 @@ pyarrow>=7.0.0
pymysql>=1.0.2
pyreadstat>=1.1.5
tables>=3.7.0
-python-snappy>=0.6.1
pyxlsb>=1.0.9
s3fs>=2022.05.0
scipy>=1.8.1
diff --git a/scripts/tests/data/deps_expected_random.yaml b/scripts/tests/data/deps_expected_random.yaml
index 35d7fe74806a9..c70025f8f019d 100644
--- a/scripts/tests/data/deps_expected_random.yaml
+++ b/scripts/tests/data/deps_expected_random.yaml
@@ -26,7 +26,6 @@ dependencies:
- beautifulsoup4>=5.9.3
- blosc
- bottleneck>=1.3.2
- - brotlipy>=0.7.0
- fastparquet>=0.6.3
- fsspec>=2021.07.0
- html5lib>=1.1
@@ -45,7 +44,6 @@ dependencies:
- pymysql>=1.0.2
- pyreadstat>=1.1.2
- pytables>=3.6.1
- - python-snappy>=0.6.0
- pyxlsb>=1.0.8
- s3fs>=2021.08.0
- scipy>=1.7.1
diff --git a/scripts/tests/data/deps_minimum.toml b/scripts/tests/data/deps_minimum.toml
index 6f56ca498794b..b43815a982139 100644
--- a/scripts/tests/data/deps_minimum.toml
+++ b/scripts/tests/data/deps_minimum.toml
@@ -77,12 +77,11 @@ xml = ['lxml>=4.6.3']
plot = ['matplotlib>=3.6.1']
output_formatting = ['jinja2>=3.0.0', 'tabulate>=0.8.9']
clipboard = ['PyQt5>=5.15.1', 'qtpy>=2.2.0']
-compression = ['brotlipy>=0.7.0', 'python-snappy>=0.6.0', 'zstandard>=0.15.2']
+compression = ['zstandard>=0.15.2']
all = ['beautifulsoup4>=5.9.3',
# blosc only available on conda (https://github.com/Blosc/python-blosc/issues/297)
#'blosc>=1.21.0',
'bottleneck>=1.3.2',
- 'brotlipy>=0.7.0',
'fastparquet>=0.6.3',
'fsspec>=2021.07.0',
'gcsfs>=2021.07.0',
@@ -104,7 +103,6 @@ all = ['beautifulsoup4>=5.9.3',
'pytest>=7.3.2',
'pytest-xdist>=2.2.0',
'pytest-asyncio>=0.17.0',
- 'python-snappy>=0.6.0',
'pyxlsb>=1.0.8',
'qtpy>=2.2.0',
'scipy>=1.7.1',
diff --git a/scripts/tests/data/deps_unmodified_random.yaml b/scripts/tests/data/deps_unmodified_random.yaml
index 405762d33f53e..503eb3c7c7734 100644
--- a/scripts/tests/data/deps_unmodified_random.yaml
+++ b/scripts/tests/data/deps_unmodified_random.yaml
@@ -26,7 +26,6 @@ dependencies:
- beautifulsoup4
- blosc
- bottleneck>=1.3.2
- - brotlipy
- fastparquet>=0.6.3
- fsspec>=2021.07.0
- html5lib>=1.1
@@ -45,7 +44,6 @@ dependencies:
- pymysql>=1.0.2
- pyreadstat>=1.1.2
- pytables>=3.6.1
- - python-snappy>=0.6.0
- pyxlsb>=1.0.8
- s3fs>=2021.08.0
- scipy>=1.7.1
| See https://github.com/pandas-dev/pandas/issues/32417#issuecomment-1684356585 for context.
The `python-snappy` and `brotli` packages were in the past a dependency of fastparquet, but that is no longer the case (https://github.com/dask/fastparquet/commit/2e657224a7250d49aa2e4b8c457a98e008687c21, replaced by `cramjam` that provides both compression options).
I am not sure if there was another reason that we included this in our optional dependencies, but I didn't find any other usage in our codebase.
Closes #32417 | https://api.github.com/repos/pandas-dev/pandas/pulls/54633 | 2023-08-18T19:50:48Z | 2023-08-19T07:45:34Z | 2023-08-19T07:45:34Z | 2023-08-19T07:45:41Z |
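With the brotli/snappy special cases deleted, the `get_version` helper in `pandas/compat/_optional.py` reduces to a single attribute check. A minimal standalone sketch of the remaining behavior (a simplification, not the full pandas helper, which also normalizes the psycopg2 version string):

```python
import types


def get_version(module: types.ModuleType) -> str:
    # After the change there is no per-module escape hatch: any module
    # lacking a __version__ attribute is treated as an error.
    version = getattr(module, "__version__", None)
    if version is None:
        raise ImportError(f"Can't determine version for {module.__name__}")
    return version
```

Dropping the special cases is safe precisely because the packages that had no `__version__` (brotli, snappy) are no longer optional dependencies at all.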
DEPR: deprecated nonkeyword arguments in to_csv | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index 6fdffb4d78341..d8b63a6d1395d 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -92,6 +92,7 @@ Other API changes
Deprecations
~~~~~~~~~~~~
+- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_csv` except ``path_or_buf``. (:issue:`54229`)
- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_hdf` except ``path_or_buf``. (:issue:`54229`)
- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_html` except ``buf``. (:issue:`54229`)
- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_json` except ``path_or_buf``. (:issue:`54229`)
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 7da41b890598d..cf60717011222 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -3728,6 +3728,9 @@ def to_csv(
...
@final
+ @deprecate_nonkeyword_arguments(
+ version="3.0", allowed_args=["self", "path_or_buf"], name="to_csv"
+ )
@doc(
storage_options=_shared_docs["storage_options"],
compression_options=_shared_docs["compression_options"] % "path_or_buf",
diff --git a/pandas/tests/io/formats/test_to_csv.py b/pandas/tests/io/formats/test_to_csv.py
index c8e984a92f418..822bd14610388 100644
--- a/pandas/tests/io/formats/test_to_csv.py
+++ b/pandas/tests/io/formats/test_to_csv.py
@@ -731,3 +731,15 @@ def test_to_csv_iterative_compression_buffer(compression):
pd.read_csv(buffer, compression=compression, index_col=0), df
)
assert not buffer.closed
+
+
+def test_to_csv_pos_args_deprecation():
+ # GH-54229
+ df = DataFrame({"a": [1, 2, 3]})
+ msg = (
+ r"Starting with pandas version 3.0 all arguments of to_csv except for the "
+ r"argument 'path_or_buf' will be keyword-only."
+ )
+ with tm.assert_produces_warning(FutureWarning, match=msg):
+ buffer = io.BytesIO()
+ df.to_csv(buffer, ";")
| - [x] xref #54229
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added an entry in the latest `doc/source/whatsnew/v2.2.0.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54632 | 2023-08-18T19:46:39Z | 2023-08-21T23:20:42Z | 2023-08-21T23:20:42Z | 2023-08-21T23:20:48Z |
DEPR: deprecated nonkeyword arguments in to_parquet | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index d8b63a6d1395d..430f61ee6827b 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -98,6 +98,7 @@ Deprecations
- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_json` except ``path_or_buf``. (:issue:`54229`)
- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_latex` except ``buf``. (:issue:`54229`)
- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_markdown` except ``buf``. (:issue:`54229`)
+- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_parquet` except ``path``. (:issue:`54229`)
- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_pickle` except ``path``. (:issue:`54229`)
- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_string` except ``buf``. (:issue:`54229`)
- Deprecated not passing a tuple to :class:`DataFrameGroupBy.get_group` or :class:`SeriesGroupBy.get_group` when grouping by a length-1 list-like (:issue:`25971`)
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 05c0db0c09376..2bbab10be45ad 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -2878,6 +2878,9 @@ def to_parquet(
) -> None:
...
+ @deprecate_nonkeyword_arguments(
+ version="3.0", allowed_args=["self", "path"], name="to_parquet"
+ )
@doc(storage_options=_shared_docs["storage_options"])
def to_parquet(
self,
diff --git a/pandas/tests/io/test_parquet.py b/pandas/tests/io/test_parquet.py
index a4c6cfcf9fe0e..9182e4c4e7674 100644
--- a/pandas/tests/io/test_parquet.py
+++ b/pandas/tests/io/test_parquet.py
@@ -359,6 +359,20 @@ def test_cross_engine_fp_pa(df_cross_compat, pa, fp):
tm.assert_frame_equal(result, df[["a", "d"]])
+def test_parquet_pos_args_deprecation(engine):
+ # GH-54229
+ df = pd.DataFrame({"a": [1, 2, 3]})
+ msg = (
+ r"Starting with pandas version 3.0 all arguments of to_parquet except for the "
+ r"argument 'path' will be keyword-only."
+ )
+ with tm.ensure_clean() as path:
+ with tm.assert_produces_warning(
+ FutureWarning, match=msg, check_stacklevel=False
+ ):
+ df.to_parquet(path, engine)
+
+
class Base:
def check_error_on_write(self, df, engine, exc, err_msg):
# check that we are raising the exception on writing
@@ -998,7 +1012,7 @@ def test_filter_row_groups(self, pa):
pytest.importorskip("pyarrow")
df = pd.DataFrame({"a": list(range(0, 3))})
with tm.ensure_clean() as path:
- df.to_parquet(path, pa)
+ df.to_parquet(path, engine=pa)
result = read_parquet(
path, pa, filters=[("a", "==", 0)], use_legacy_dataset=False
)
@@ -1011,7 +1025,7 @@ def test_read_parquet_manager(self, pa, using_array_manager):
)
with tm.ensure_clean() as path:
- df.to_parquet(path, pa)
+ df.to_parquet(path, engine=pa)
result = read_parquet(path, pa)
if using_array_manager:
assert isinstance(result._mgr, pd.core.internals.ArrayManager)
@@ -1177,7 +1191,7 @@ def test_filter_row_groups(self, fp):
d = {"a": list(range(0, 3))}
df = pd.DataFrame(d)
with tm.ensure_clean() as path:
- df.to_parquet(path, fp, compression=None, row_group_offsets=1)
+ df.to_parquet(path, engine=fp, compression=None, row_group_offsets=1)
result = read_parquet(path, fp, filters=[("a", "==", 0)])
assert len(result) == 1
| - [x] xref #54229
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added an entry in the latest `doc/source/whatsnew/v2.2.0.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54631 | 2023-08-18T19:38:03Z | 2023-08-22T16:39:18Z | 2023-08-22T16:39:17Z | 2023-08-22T16:49:06Z |
DEPR: make arguments keyword only in to_dict | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index d8b63a6d1395d..105efe38cccf6 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -93,6 +93,7 @@ Other API changes
Deprecations
~~~~~~~~~~~~
- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_csv` except ``path_or_buf``. (:issue:`54229`)
+- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_dict`. (:issue:`54229`)
- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_hdf` except ``path_or_buf``. (:issue:`54229`)
- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_html` except ``buf``. (:issue:`54229`)
- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_json` except ``path_or_buf``. (:issue:`54229`)
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 05c0db0c09376..23355d9b6c42f 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1933,6 +1933,9 @@ def to_dict(
def to_dict(self, orient: Literal["records"], into: type[dict] = ...) -> list[dict]:
...
+ @deprecate_nonkeyword_arguments(
+ version="3.0", allowed_args=["self", "orient"], name="to_dict"
+ )
def to_dict(
self,
orient: Literal[
diff --git a/pandas/tests/frame/methods/test_to_dict.py b/pandas/tests/frame/methods/test_to_dict.py
index 1118ad88d5092..7bb9518f9b0f9 100644
--- a/pandas/tests/frame/methods/test_to_dict.py
+++ b/pandas/tests/frame/methods/test_to_dict.py
@@ -99,19 +99,19 @@ def test_to_dict(self, mapping):
for k2, v2 in v.items():
assert v2 == recons_data[k][k2]
- recons_data = DataFrame(test_data).to_dict("list", mapping)
+ recons_data = DataFrame(test_data).to_dict("list", into=mapping)
for k, v in test_data.items():
for k2, v2 in v.items():
assert v2 == recons_data[k][int(k2) - 1]
- recons_data = DataFrame(test_data).to_dict("series", mapping)
+ recons_data = DataFrame(test_data).to_dict("series", into=mapping)
for k, v in test_data.items():
for k2, v2 in v.items():
assert v2 == recons_data[k][k2]
- recons_data = DataFrame(test_data).to_dict("split", mapping)
+ recons_data = DataFrame(test_data).to_dict("split", into=mapping)
expected_split = {
"columns": ["A", "B"],
"index": ["1", "2", "3"],
@@ -119,7 +119,7 @@ def test_to_dict(self, mapping):
}
tm.assert_dict_equal(recons_data, expected_split)
- recons_data = DataFrame(test_data).to_dict("records", mapping)
+ recons_data = DataFrame(test_data).to_dict("records", into=mapping)
expected_records = [
{"A": 1.0, "B": "1"},
{"A": 2.0, "B": "2"},
@@ -494,3 +494,13 @@ def test_to_dict_masked_native_python(self):
df = DataFrame({"a": Series([1, NA], dtype="Int64"), "B": 1})
result = df.to_dict(orient="records")
assert isinstance(result[0]["a"], int)
+
+ def test_to_dict_pos_args_deprecation(self):
+ # GH-54229
+ df = DataFrame({"a": [1, 2, 3]})
+ msg = (
+ r"Starting with pandas version 3.0 all arguments of to_dict except for the "
+ r"argument 'orient' will be keyword-only."
+ )
+ with tm.assert_produces_warning(FutureWarning, match=msg):
+ df.to_dict("records", {})
| - [x] xref #54229
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added an entry in the latest `doc/source/whatsnew/v2.2.0.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54630 | 2023-08-18T19:23:45Z | 2023-08-22T18:59:42Z | 2023-08-22T18:59:42Z | 2023-08-22T20:01:59Z |
BUG: groupby.var with ArrowDtype(pa.decimal128) | diff --git a/doc/source/whatsnew/v2.1.0.rst b/doc/source/whatsnew/v2.1.0.rst
index 43a64a79e691b..fda75376bb1bf 100644
--- a/doc/source/whatsnew/v2.1.0.rst
+++ b/doc/source/whatsnew/v2.1.0.rst
@@ -798,6 +798,7 @@ Groupby/resample/rolling
- Bug in :meth:`.SeriesGroupBy.nth` and :meth:`.DataFrameGroupBy.nth` after performing column selection when using ``dropna="any"`` or ``dropna="all"`` would not subset columns (:issue:`53518`)
- Bug in :meth:`.SeriesGroupBy.nth` and :meth:`.DataFrameGroupBy.nth` raised after performing column selection when using ``dropna="any"`` or ``dropna="all"`` resulted in rows being dropped (:issue:`53518`)
- Bug in :meth:`.SeriesGroupBy.sum` and :meth:`.DataFrameGroupBy.sum` summing ``np.inf + np.inf`` and ``(-np.inf) + (-np.inf)`` to ``np.nan`` instead of ``np.inf`` and ``-np.inf`` respectively (:issue:`53606`)
+- Bug in :meth:`.SeriesGroupBy.var` and :meth:`.DataFrameGroupBy.var` where the dtype would be ``np.float64`` for data with :class:`ArrowDtype` with ``pyarrow.decimal128`` type (:issue:`54627`)
- Bug in :meth:`Series.groupby` raising an error when grouped :class:`Series` has a :class:`DatetimeIndex` index and a :class:`Series` with a name that is a month is given to the ``by`` argument (:issue:`48509`)
Reshaping
diff --git a/pandas/core/arrays/arrow/array.py b/pandas/core/arrays/arrow/array.py
index 3c65e6b4879e2..4cbb7a5f09024 100644
--- a/pandas/core/arrays/arrow/array.py
+++ b/pandas/core/arrays/arrow/array.py
@@ -47,6 +47,7 @@
ExtensionArray,
ExtensionArraySupportsAnyAll,
)
+from pandas.core.arrays.floating import Float64Dtype
from pandas.core.arrays.masked import BaseMaskedArray
from pandas.core.arrays.string_ import StringDtype
import pandas.core.common as com
@@ -1942,12 +1943,16 @@ def _to_masked(self):
if pa.types.is_floating(pa_dtype) or pa.types.is_integer(pa_dtype):
na_value = 1
+ dtype = _arrow_dtype_mapping()[pa_dtype]
elif pa.types.is_boolean(pa_dtype):
na_value = True
+ dtype = _arrow_dtype_mapping()[pa_dtype]
+ elif pa.types.is_decimal(pa_dtype):
+ na_value = 1
+ dtype = Float64Dtype()
else:
raise NotImplementedError
- dtype = _arrow_dtype_mapping()[pa_dtype]
mask = self.isna()
arr = self.to_numpy(dtype=dtype.numpy_dtype, na_value=na_value)
return dtype.construct_array_type()(arr, mask)
diff --git a/pandas/tests/extension/test_arrow.py b/pandas/tests/extension/test_arrow.py
index 5955cfc2ef5e4..595ec9d742966 100644
--- a/pandas/tests/extension/test_arrow.py
+++ b/pandas/tests/extension/test_arrow.py
@@ -2960,6 +2960,24 @@ def test_groupby_count_return_arrow_dtype(data_missing):
tm.assert_frame_equal(result, expected)
+def test_groupby_var_decimal_return_arrow_dtype():
+ # GH 54627
+ df = pd.DataFrame(
+ {
+ "A": pd.Series([True, True], dtype="bool[pyarrow]"),
+ "B": pd.Series([123, 12], dtype=ArrowDtype(pa.decimal128(6, 3))),
+ }
+ )
+ result = df.groupby("A").var()
+ expected = pd.DataFrame(
+ [6160.5],
+ index=pd.Index([True], dtype="bool[pyarrow]", name="A"),
+ columns=["B"],
+ dtype="double[pyarrow]",
+ )
+ tm.assert_frame_equal(result, expected)
+
+
def test_arrowextensiondtype_dataframe_repr():
# GH 54062
df = pd.DataFrame(
| - [x] closes #54627 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
I don't think modifying `_arrow_dtype_mapping` is completely correct because I'm not sure we want to do this conversion all the time.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54629 | 2023-08-18T18:34:23Z | 2023-08-18T20:22:25Z | null | 2023-08-18T20:22:30Z |
indentations on line 9209 have been fixed | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 1e10e8f11a575..68ecca01cf07a 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -9221,10 +9221,10 @@ def stack(
DataFrame. The new inner-most levels are created by pivoting the
columns of the current dataframe:
- - if the columns have a single level, the output is a Series;
- - if the columns have multiple levels, the new index
- level(s) is (are) taken from the prescribed level(s) and
- the output is a DataFrame.
+ - if the columns have a single level, the output is a Series;
+ - if the columns have multiple levels, the new index
+ level(s) is (are) taken from the prescribed level(s) and
+ the output is a DataFrame.
Parameters
----------
| All I did was remove the incorrect indentation and align the bullet text consistently
| https://api.github.com/repos/pandas-dev/pandas/pulls/54626 | 2023-08-18T14:59:31Z | 2023-09-11T16:36:37Z | null | 2023-09-11T16:36:37Z |
BUG: Fix error in printing timezone series | diff --git a/doc/source/whatsnew/v2.1.0.rst b/doc/source/whatsnew/v2.1.0.rst
index 43a64a79e691b..7a1a61bed826a 100644
--- a/doc/source/whatsnew/v2.1.0.rst
+++ b/doc/source/whatsnew/v2.1.0.rst
@@ -664,6 +664,7 @@ Datetimelike
- Bug in constructing a :class:`Timestamp` from a string representing a time without a date inferring an incorrect unit (:issue:`54097`)
- Bug in constructing a :class:`Timestamp` with ``ts_input=pd.NA`` raising ``TypeError`` (:issue:`45481`)
- Bug in parsing datetime strings with weekday but no day e.g. "2023 Sept Thu" incorrectly raising ``AttributeError`` instead of ``ValueError`` (:issue:`52659`)
+- Bug in the repr for :class:`Series` when dtype is a timezone aware datetime with non-nanosecond resolution raising ``OutOfBoundsDatetime`` (:issue:`54623`)
Timedelta
^^^^^^^^^
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index ff26abd5cc26c..2297f7945a264 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -1830,8 +1830,8 @@ def get_format_datetime64_from_values(
class Datetime64TZFormatter(Datetime64Formatter):
def _format_strings(self) -> list[str]:
"""we by definition have a TZ"""
+ ido = is_dates_only(self.values)
values = self.values.astype(object)
- ido = is_dates_only(values)
formatter = self.formatter or get_format_datetime64(
ido, date_format=self.date_format
)
diff --git a/pandas/tests/io/formats/test_format.py b/pandas/tests/io/formats/test_format.py
index 8341dda1597bb..fbc5cdd6953ff 100644
--- a/pandas/tests/io/formats/test_format.py
+++ b/pandas/tests/io/formats/test_format.py
@@ -3320,6 +3320,14 @@ def format_func(x):
result = formatter.get_result()
assert result == ["10:10", "12:12"]
+ def test_datetime64formatter_tz_ms(self):
+ x = Series(
+ np.array(["2999-01-01", "2999-01-02", "NaT"], dtype="datetime64[ms]")
+ ).dt.tz_localize("US/Pacific")
+ result = fmt.Datetime64TZFormatter(x).get_result()
+ assert result[0].strip() == "2999-01-01 00:00:00-08:00"
+ assert result[1].strip() == "2999-01-02 00:00:00-08:00"
+
class TestNaTFormatting:
def test_repr(self):
| - [x] closes #54623
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54625 | 2023-08-18T14:24:22Z | 2023-08-22T23:55:27Z | 2023-08-22T23:55:27Z | 2023-08-24T12:01:18Z |
add the words 'of very good level' | diff --git a/README.md b/README.md
index 8ea473beb107e..eefea81dc0b67 100644
--- a/README.md
+++ b/README.md
@@ -15,7 +15,7 @@
## What is it?
-**pandas** is a Python package that provides fast, flexible, and expressive data
+**pandas** is a Python package of very good level that provides fast, flexible, and expressive data
structures designed to make working with "relational" or "labeled" data both
easy and intuitive. It aims to be the fundamental high-level building block for
doing practical, **real world** data analysis in Python. Additionally, it has
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54624 | 2023-08-18T14:04:58Z | 2023-08-18T16:41:39Z | null | 2023-08-18T16:41:39Z |
made path_or_buffer rely on position | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index c28ae86985896..87d499dba2678 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -2358,7 +2358,8 @@ def to_excel(
)
def to_json(
self,
- path_or_buf: FilePath | WriteBuffer[bytes] | WriteBuffer[str] | None = None,
+ path: FilePath | WriteBuffer[bytes] | WriteBuffer[str] | None = None,
+ /,
orient: Literal["split", "records", "index", "table", "columns", "values"]
| None = None,
date_format: str | None = None,
@@ -2372,6 +2373,7 @@ def to_json(
indent: int | None = None,
storage_options: StorageOptions | None = None,
mode: Literal["a", "w"] = "w",
+ path_or_buf: FilePath | WriteBuffer[bytes] | WriteBuffer[str] | None = None,
) -> str | None:
"""
Convert the object to a JSON string.
@@ -2616,6 +2618,14 @@ def to_json(
"""
from pandas.io import json
+ # validate input
+ if path_or_buf is not None and path is not None:
+ raise ValueError(
+ "pass the path as the first argument and don't pass it twice"
+ )
+ if path is not None:
+ path_or_buf = path
+
if date_format is None and orient == "table":
date_format = "iso"
elif date_format is None:
@@ -2647,8 +2657,9 @@ def to_json(
)
def to_hdf(
self,
- path_or_buf: FilePath | HDFStore,
- key: str,
+ path: FilePath | HDFStore,
+ /,
+ key: str | None = None,
mode: Literal["a", "w", "r+"] = "a",
complevel: int | None = None,
complib: Literal["zlib", "lzo", "bzip2", "blosc"] | None = None,
@@ -2661,6 +2672,7 @@ def to_hdf(
data_columns: Literal[True] | list[str] | None = None,
errors: OpenFileErrors = "strict",
encoding: str = "UTF-8",
+ path_or_buf: FilePath | HDFStore | None = None,
) -> None:
"""
Write the contained data to an HDF5 file using HDFStore.
@@ -2775,6 +2787,18 @@ def to_hdf(
"""
from pandas.io import pytables
+ # validate input
+ if path_or_buf is None and path is None:
+ raise ValueError("you need to insert a path")
+ if path_or_buf is not None and path is not None:
+ raise ValueError(
+ "pass the path as the first argument and don't pass it twice"
+ )
+ if key is None:
+ raise TypeError("missing 1 required argument: 'key'")
+ if path is not None:
+ path_or_buf = path
+
# Argument 3 to "to_hdf" has incompatible type "NDFrame"; expected
# "Union[DataFrame, Series]" [arg-type]
pytables.to_hdf(
@@ -3673,7 +3697,8 @@ def _to_latex_via_styler(
@overload
def to_csv(
self,
- path_or_buf: None = ...,
+ path: None = ...,
+ /,
sep: str = ...,
na_rep: str = ...,
float_format: str | Callable | None = ...,
@@ -3694,13 +3719,14 @@ def to_csv(
decimal: str = ...,
errors: OpenFileErrors = ...,
storage_options: StorageOptions = ...,
+ path_or_buf: None = ...,
) -> str:
...
@overload
def to_csv(
self,
- path_or_buf: FilePath | WriteBuffer[bytes] | WriteBuffer[str],
+ path: FilePath | WriteBuffer[bytes] | WriteBuffer[str] | None = None,
sep: str = ...,
na_rep: str = ...,
float_format: str | Callable | None = ...,
@@ -3721,6 +3747,7 @@ def to_csv(
decimal: str = ...,
errors: OpenFileErrors = ...,
storage_options: StorageOptions = ...,
+ path_or_buf: FilePath | WriteBuffer[bytes] | WriteBuffer[str] | None = None,
) -> None:
...
@@ -3731,7 +3758,8 @@ def to_csv(
)
def to_csv(
self,
- path_or_buf: FilePath | WriteBuffer[bytes] | WriteBuffer[str] | None = None,
+ path: FilePath | WriteBuffer[bytes] | WriteBuffer[str] | None = None,
+ /,
sep: str = ",",
na_rep: str = "",
float_format: str | Callable | None = None,
@@ -3752,6 +3780,7 @@ def to_csv(
decimal: str = ".",
errors: OpenFileErrors = "strict",
storage_options: StorageOptions | None = None,
+ path_or_buf: FilePath | WriteBuffer[bytes] | WriteBuffer[str] | None = None,
) -> str | None:
r"""
Write object to a comma-separated values (csv) file.
@@ -3895,6 +3924,10 @@ def to_csv(
>>> os.makedirs('folder/subfolder', exist_ok=True) # doctest: +SKIP
>>> df.to_csv('folder/subfolder/out.csv') # doctest: +SKIP
"""
+
+ if path is not None:
+ path_or_buf = path
+
df = self if isinstance(self, ABCDataFrame) else self.to_frame()
formatter = DataFrameFormatter(
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index ff26abd5cc26c..88723c8edf713 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -1102,7 +1102,8 @@ def to_string(
def to_csv(
self,
- path_or_buf: FilePath | WriteBuffer[bytes] | WriteBuffer[str] | None = None,
+ path: FilePath | WriteBuffer[bytes] | WriteBuffer[str] | None = None,
+ /,
encoding: str | None = None,
sep: str = ",",
columns: Sequence[Hashable] | None = None,
@@ -1118,12 +1119,23 @@ def to_csv(
escapechar: str | None = None,
errors: str = "strict",
storage_options: StorageOptions | None = None,
+ path_or_buf: FilePath | WriteBuffer[bytes] | WriteBuffer[str] | None = None,
) -> str | None:
"""
Render dataframe as comma-separated file.
"""
from pandas.io.formats.csvs import CSVFormatter
+ # validate input
+ if path_or_buf is None and path is None:
+ raise ValueError("you need to insert a path")
+ if path_or_buf is not None and path is not None:
+ raise ValueError(
+ "pass the path as the first argument and don't pass it twice"
+ )
+ if path is not None:
+ path_or_buf = path
+
if path_or_buf is None:
created_buffer = True
path_or_buf = StringIO()
diff --git a/pandas/io/json/_json.py b/pandas/io/json/_json.py
index 833f4986b6da6..d0005e3df0e73 100644
--- a/pandas/io/json/_json.py
+++ b/pandas/io/json/_json.py
@@ -94,8 +94,9 @@
# interface to/from
@overload
def to_json(
- path_or_buf: FilePath | WriteBuffer[str] | WriteBuffer[bytes],
- obj: NDFrame,
+ path: FilePath | WriteBuffer[str] | WriteBuffer[bytes] | None = None,
+ /,
+ obj: NDFrame | None = None,
orient: str | None = ...,
date_format: str = ...,
double_precision: int = ...,
@@ -108,14 +109,16 @@ def to_json(
indent: int = ...,
storage_options: StorageOptions = ...,
mode: Literal["a", "w"] = ...,
+ path_or_buf: FilePath | WriteBuffer[str] | WriteBuffer[bytes] | None = None,
) -> None:
...
@overload
def to_json(
- path_or_buf: None,
- obj: NDFrame,
+ path: None = None,
+ /,
+ obj: NDFrame | None = None,
orient: str | None = ...,
date_format: str = ...,
double_precision: int = ...,
@@ -128,13 +131,15 @@ def to_json(
indent: int = ...,
storage_options: StorageOptions = ...,
mode: Literal["a", "w"] = ...,
+ path_or_buf: None = None,
) -> str:
...
def to_json(
- path_or_buf: FilePath | WriteBuffer[str] | WriteBuffer[bytes] | None,
- obj: NDFrame,
+ path: FilePath | WriteBuffer[str] | WriteBuffer[bytes] | None = None,
+ /,
+ obj: NDFrame | None = None,
orient: str | None = None,
date_format: str = "epoch",
double_precision: int = 10,
@@ -147,7 +152,16 @@ def to_json(
indent: int = 0,
storage_options: StorageOptions | None = None,
mode: Literal["a", "w"] = "w",
+ path_or_buf: FilePath | WriteBuffer[str] | WriteBuffer[bytes] | None = None,
) -> str | None:
+ # validate input
+ if obj is None:
+ raise TypeError("missing 1 required positional argument: 'obj'")
+ if path_or_buf is not None and path is not None:
+ raise ValueError("pass the path as the first argument and don't pass it twice")
+ if path is not None:
+ path_or_buf = path
+
if orient in ["records", "values"] and index is True:
raise ValueError(
"'index=True' is only valid when 'orient' is 'split', 'table', "
@@ -399,7 +413,8 @@ def obj_to_write(self) -> NDFrame | Mapping[IndexLabel, Any]:
@overload
def read_json(
- path_or_buf: FilePath | ReadBuffer[str] | ReadBuffer[bytes],
+ path: FilePath | ReadBuffer[str] | ReadBuffer[bytes] | None = None,
+ /,
*,
orient: str | None = ...,
typ: Literal["frame"] = ...,
@@ -418,13 +433,15 @@ def read_json(
storage_options: StorageOptions = ...,
dtype_backend: DtypeBackend | lib.NoDefault = ...,
engine: JSONEngine = ...,
+ path_or_buf: FilePath | ReadBuffer[str] | ReadBuffer[bytes] | None = None,
) -> JsonReader[Literal["frame"]]:
...
@overload
def read_json(
- path_or_buf: FilePath | ReadBuffer[str] | ReadBuffer[bytes],
+ path: FilePath | ReadBuffer[str] | ReadBuffer[bytes] | None = None,
+ /,
*,
orient: str | None = ...,
typ: Literal["series"],
@@ -443,13 +460,15 @@ def read_json(
storage_options: StorageOptions = ...,
dtype_backend: DtypeBackend | lib.NoDefault = ...,
engine: JSONEngine = ...,
+ path_or_buf: FilePath | ReadBuffer[str] | ReadBuffer[bytes] | None = None,
) -> JsonReader[Literal["series"]]:
...
@overload
def read_json(
- path_or_buf: FilePath | ReadBuffer[str] | ReadBuffer[bytes],
+ path: FilePath | ReadBuffer[str] | ReadBuffer[bytes] | None = None,
+ /,
*,
orient: str | None = ...,
typ: Literal["series"],
@@ -468,13 +487,15 @@ def read_json(
storage_options: StorageOptions = ...,
dtype_backend: DtypeBackend | lib.NoDefault = ...,
engine: JSONEngine = ...,
+ path_or_buf: FilePath | ReadBuffer[str] | ReadBuffer[bytes] | None = None,
) -> Series:
...
@overload
def read_json(
- path_or_buf: FilePath | ReadBuffer[str] | ReadBuffer[bytes],
+ path: FilePath | ReadBuffer[str] | ReadBuffer[bytes] | None = None,
+ /,
*,
orient: str | None = ...,
typ: Literal["frame"] = ...,
@@ -493,6 +514,7 @@ def read_json(
storage_options: StorageOptions = ...,
dtype_backend: DtypeBackend | lib.NoDefault = ...,
engine: JSONEngine = ...,
+ path_or_buf: FilePath | ReadBuffer[str] | ReadBuffer[bytes] | None = None,
) -> DataFrame:
...
@@ -502,7 +524,8 @@ def read_json(
decompression_options=_shared_docs["decompression_options"] % "path_or_buf",
)
def read_json(
- path_or_buf: FilePath | ReadBuffer[str] | ReadBuffer[bytes],
+ path: FilePath | ReadBuffer[str] | ReadBuffer[bytes] | None = None,
+ /,
*,
orient: str | None = None,
typ: Literal["frame", "series"] = "frame",
@@ -521,6 +544,7 @@ def read_json(
storage_options: StorageOptions | None = None,
dtype_backend: DtypeBackend | lib.NoDefault = lib.no_default,
engine: JSONEngine = "ujson",
+ path_or_buf: FilePath | ReadBuffer[str] | ReadBuffer[bytes] | None = None,
) -> DataFrame | Series | JsonReader:
"""
Convert a JSON string to pandas object.
@@ -760,6 +784,15 @@ def read_json(
}}\
'
"""
+
+ # validate Input
+ if path_or_buf is None and path is None:
+ raise ValueError("you need to insert a path")
+ if path_or_buf is not None and path is not None:
+ raise ValueError("pass the path as the first argument and don't pass it twice")
+ if path is not None:
+ path_or_buf = path
+
if orient == "table" and dtype:
raise ValueError("cannot pass both dtype and orient='table'")
if orient == "table" and convert_axes:
diff --git a/pandas/io/parsers/readers.py b/pandas/io/parsers/readers.py
index 50dee463a06eb..f8fd0902d72da 100644
--- a/pandas/io/parsers/readers.py
+++ b/pandas/io/parsers/readers.py
@@ -621,7 +621,8 @@ def _read(
# iterator=True -> TextFileReader
@overload
def read_csv(
- filepath_or_buffer: FilePath | ReadCsvBuffer[bytes] | ReadCsvBuffer[str],
+ path: FilePath | ReadCsvBuffer[bytes] | ReadCsvBuffer[str] | None = None,
+ /,
*,
sep: str | None | lib.NoDefault = ...,
delimiter: str | None | lib.NoDefault = ...,
@@ -671,6 +672,10 @@ def read_csv(
float_precision: Literal["high", "legacy"] | None = ...,
storage_options: StorageOptions = ...,
dtype_backend: DtypeBackend | lib.NoDefault = ...,
+ filepath_or_buffer: FilePath
+ | ReadCsvBuffer[bytes]
+ | ReadCsvBuffer[str]
+ | None = None,
) -> TextFileReader:
...
@@ -678,7 +683,8 @@ def read_csv(
# chunksize=int -> TextFileReader
@overload
def read_csv(
- filepath_or_buffer: FilePath | ReadCsvBuffer[bytes] | ReadCsvBuffer[str],
+ path: FilePath | ReadCsvBuffer[bytes] | ReadCsvBuffer[str] | None = None,
+ /,
*,
sep: str | None | lib.NoDefault = ...,
delimiter: str | None | lib.NoDefault = ...,
@@ -728,6 +734,10 @@ def read_csv(
float_precision: Literal["high", "legacy"] | None = ...,
storage_options: StorageOptions = ...,
dtype_backend: DtypeBackend | lib.NoDefault = ...,
+ filepath_or_buffer: FilePath
+ | ReadCsvBuffer[bytes]
+ | ReadCsvBuffer[str]
+ | None = None,
) -> TextFileReader:
...
@@ -735,7 +745,8 @@ def read_csv(
# default case -> DataFrame
@overload
def read_csv(
- filepath_or_buffer: FilePath | ReadCsvBuffer[bytes] | ReadCsvBuffer[str],
+ path: FilePath | ReadCsvBuffer[bytes] | ReadCsvBuffer[str] | None = None,
+ /,
*,
sep: str | None | lib.NoDefault = ...,
delimiter: str | None | lib.NoDefault = ...,
@@ -785,6 +796,10 @@ def read_csv(
float_precision: Literal["high", "legacy"] | None = ...,
storage_options: StorageOptions = ...,
dtype_backend: DtypeBackend | lib.NoDefault = ...,
+ filepath_or_buffer: FilePath
+ | ReadCsvBuffer[bytes]
+ | ReadCsvBuffer[str]
+ | None = None,
) -> DataFrame:
...
@@ -792,7 +807,8 @@ def read_csv(
# Unions -> DataFrame | TextFileReader
@overload
def read_csv(
- filepath_or_buffer: FilePath | ReadCsvBuffer[bytes] | ReadCsvBuffer[str],
+ path: FilePath | ReadCsvBuffer[bytes] | ReadCsvBuffer[str] | None = None,
+ /,
*,
sep: str | None | lib.NoDefault = ...,
delimiter: str | None | lib.NoDefault = ...,
@@ -842,6 +858,10 @@ def read_csv(
float_precision: Literal["high", "legacy"] | None = ...,
storage_options: StorageOptions = ...,
dtype_backend: DtypeBackend | lib.NoDefault = ...,
+ filepath_or_buffer: FilePath
+ | ReadCsvBuffer[bytes]
+ | ReadCsvBuffer[str]
+ | None = None,
) -> DataFrame | TextFileReader:
...
@@ -859,7 +879,8 @@ def read_csv(
)
)
def read_csv(
- filepath_or_buffer: FilePath | ReadCsvBuffer[bytes] | ReadCsvBuffer[str],
+ path: FilePath | ReadCsvBuffer[bytes] | ReadCsvBuffer[str] | None = None,
+ /,
*,
sep: str | None | lib.NoDefault = lib.no_default,
delimiter: str | None | lib.NoDefault = None,
@@ -917,7 +938,19 @@ def read_csv(
float_precision: Literal["high", "legacy"] | None = None,
storage_options: StorageOptions | None = None,
dtype_backend: DtypeBackend | lib.NoDefault = lib.no_default,
+ filepath_or_buffer: FilePath
+ | ReadCsvBuffer[bytes]
+ | ReadCsvBuffer[str]
+ | None = None,
) -> DataFrame | TextFileReader:
+ # validate input
+ if filepath_or_buffer is None and path is None:
+ raise ValueError("you need to insert a path")
+ if filepath_or_buffer is not None and path is not None:
+ raise ValueError("pass the path as the first argument and don't pass it twice")
+ if path is not None:
+ filepath_or_buffer = path
+
if infer_datetime_format is not lib.no_default:
warnings.warn(
"The argument 'infer_datetime_format' is deprecated and will "
@@ -952,7 +985,8 @@ def read_csv(
# iterator=True -> TextFileReader
@overload
def read_table(
- filepath_or_buffer: FilePath | ReadCsvBuffer[bytes] | ReadCsvBuffer[str],
+ path: FilePath | ReadCsvBuffer[bytes] | ReadCsvBuffer[str] | None = None,
+ /,
*,
sep: str | None | lib.NoDefault = ...,
delimiter: str | None | lib.NoDefault = ...,
@@ -1002,6 +1036,10 @@ def read_table(
float_precision: str | None = ...,
storage_options: StorageOptions = ...,
dtype_backend: DtypeBackend | lib.NoDefault = ...,
+ filepath_or_buffer: FilePath
+ | ReadCsvBuffer[bytes]
+ | ReadCsvBuffer[str]
+ | None = None,
) -> TextFileReader:
...
@@ -1009,7 +1047,8 @@ def read_table(
# chunksize=int -> TextFileReader
@overload
def read_table(
- filepath_or_buffer: FilePath | ReadCsvBuffer[bytes] | ReadCsvBuffer[str],
+ path: FilePath | ReadCsvBuffer[bytes] | ReadCsvBuffer[str] | None = None,
+ /,
*,
sep: str | None | lib.NoDefault = ...,
delimiter: str | None | lib.NoDefault = ...,
@@ -1059,14 +1098,19 @@ def read_table(
float_precision: str | None = ...,
storage_options: StorageOptions = ...,
dtype_backend: DtypeBackend | lib.NoDefault = ...,
+ filepath_or_buffer: FilePath
+ | ReadCsvBuffer[bytes]
+ | ReadCsvBuffer[str]
+ | None = None,
) -> TextFileReader:
...
-# default -> DataFrame
+# default -> DataFrame hello
@overload
def read_table(
- filepath_or_buffer: FilePath | ReadCsvBuffer[bytes] | ReadCsvBuffer[str],
+ path: FilePath | ReadCsvBuffer[bytes] | ReadCsvBuffer[str] | None = None,
+ /,
*,
sep: str | None | lib.NoDefault = ...,
delimiter: str | None | lib.NoDefault = ...,
@@ -1116,6 +1160,10 @@ def read_table(
float_precision: str | None = ...,
storage_options: StorageOptions = ...,
dtype_backend: DtypeBackend | lib.NoDefault = ...,
+ filepath_or_buffer: FilePath
+ | ReadCsvBuffer[bytes]
+ | ReadCsvBuffer[str]
+ | None = None,
) -> DataFrame:
...
@@ -1123,7 +1171,8 @@ def read_table(
# Unions -> DataFrame | TextFileReader
@overload
def read_table(
- filepath_or_buffer: FilePath | ReadCsvBuffer[bytes] | ReadCsvBuffer[str],
+ path: FilePath | ReadCsvBuffer[bytes] | ReadCsvBuffer[str] | None = None,
+ /,
*,
sep: str | None | lib.NoDefault = ...,
delimiter: str | None | lib.NoDefault = ...,
@@ -1173,6 +1222,10 @@ def read_table(
float_precision: str | None = ...,
storage_options: StorageOptions = ...,
dtype_backend: DtypeBackend | lib.NoDefault = ...,
+ filepath_or_buffer: FilePath
+ | ReadCsvBuffer[bytes]
+ | ReadCsvBuffer[str]
+ | None = None,
) -> DataFrame | TextFileReader:
...
@@ -1192,7 +1245,8 @@ def read_table(
)
)
def read_table(
- filepath_or_buffer: FilePath | ReadCsvBuffer[bytes] | ReadCsvBuffer[str],
+ path: FilePath | ReadCsvBuffer[bytes] | ReadCsvBuffer[str] | None = None,
+ /,
*,
sep: str | None | lib.NoDefault = lib.no_default,
delimiter: str | None | lib.NoDefault = None,
@@ -1250,7 +1304,19 @@ def read_table(
float_precision: str | None = None,
storage_options: StorageOptions | None = None,
dtype_backend: DtypeBackend | lib.NoDefault = lib.no_default,
+ filepath_or_buffer: FilePath
+ | ReadCsvBuffer[bytes]
+ | ReadCsvBuffer[str]
+ | None = None,
) -> DataFrame | TextFileReader:
+ # validate input
+ if filepath_or_buffer is None and path is None:
+ raise ValueError("you need to insert a path")
+ if filepath_or_buffer is not None and path is not None:
+ raise ValueError("pass the path as the first argument and don't pass it twice")
+ if path is not None:
+ filepath_or_buffer = path
+
if infer_datetime_format is not lib.no_default:
warnings.warn(
"The argument 'infer_datetime_format' is deprecated and will "
@@ -1284,12 +1350,17 @@ def read_table(
def read_fwf(
- filepath_or_buffer: FilePath | ReadCsvBuffer[bytes] | ReadCsvBuffer[str],
+ path: FilePath | ReadCsvBuffer[bytes] | ReadCsvBuffer[str] | None = None,
+ /,
*,
colspecs: Sequence[tuple[int, int]] | str | None = "infer",
widths: Sequence[int] | None = None,
infer_nrows: int = 100,
dtype_backend: DtypeBackend | lib.NoDefault = lib.no_default,
+ filepath_or_buffer: FilePath
+ | ReadCsvBuffer[bytes]
+ | ReadCsvBuffer[str]
+ | None = None,
**kwds,
) -> DataFrame | TextFileReader:
r"""
@@ -1355,6 +1426,12 @@ def read_fwf(
raise ValueError("Must specify either colspecs or widths")
if colspecs not in (None, "infer") and widths is not None:
raise ValueError("You must specify only one of 'widths' and 'colspecs'")
+ if filepath_or_buffer is None and path is None:
+ raise ValueError("you need to insert a path")
+ if filepath_or_buffer is not None and path is not None:
+ raise ValueError("pass the path as the first argument and don't pass it twice")
+ if path is not None:
+ filepath_or_buffer = path
# Compute 'colspecs' from 'widths', if specified.
if widths is not None:
diff --git a/pandas/io/pickle.py b/pandas/io/pickle.py
index de9f1168e40dd..9145b849a3103 100644
--- a/pandas/io/pickle.py
+++ b/pandas/io/pickle.py
@@ -118,9 +118,11 @@ def to_pickle(
decompression_options=_shared_docs["decompression_options"] % "filepath_or_buffer",
)
def read_pickle(
- filepath_or_buffer: FilePath | ReadPickleBuffer,
+ path: FilePath | ReadPickleBuffer | None = None,
+ /,
compression: CompressionOptions = "infer",
storage_options: StorageOptions | None = None,
+ filepath_or_buffer: FilePath | ReadPickleBuffer | None = None,
) -> DataFrame | Series:
"""
Load pickled pandas object (or any object) from file.
@@ -185,6 +187,15 @@ def read_pickle(
3 3 8
4 4 9
"""
+
+ # validate input
+ if filepath_or_buffer is None and path is None:
+ raise ValueError("you need to insert a path")
+ if filepath_or_buffer is not None and path is not None:
+ raise ValueError("pass the path as the first argument and don't pass it twice")
+ if path is not None:
+ filepath_or_buffer = path
+
excs_to_catch = (AttributeError, ImportError, ModuleNotFoundError, TypeError)
with get_handle(
filepath_or_buffer,
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 89c3f7bbc4f84..fb7ef5b8b6ce6 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -259,9 +259,10 @@ def _tables():
def to_hdf(
- path_or_buf: FilePath | HDFStore,
- key: str,
- value: DataFrame | Series,
+ path: FilePath | HDFStore | None = None,
+ /,
+ key: str | None = None,
+ value: DataFrame | Series | None = None,
mode: str = "a",
complevel: int | None = None,
complib: str | None = None,
@@ -274,8 +275,21 @@ def to_hdf(
data_columns: Literal[True] | list[str] | None = None,
errors: str = "strict",
encoding: str = "UTF-8",
+ path_or_buf: FilePath | HDFStore | None = None,
) -> None:
"""store this object, close it if we opened it"""
+ # validate input
+ if path_or_buf is None and path is None:
+ raise ValueError("you need to insert a path")
+ if path_or_buf is not None and path is not None:
+ raise ValueError("pass the path as the first argument and don't pass it twice")
+ if key is None:
+ raise TypeError("missing 1 required argument: 'key'")
+ if value is None:
+ raise TypeError("missing 1 required argument: 'value'")
+ if path is not None:
+ path_or_buf = path
+
if append:
f = lambda store: store.append(
key,
@@ -315,7 +329,8 @@ def to_hdf(
def read_hdf(
- path_or_buf: FilePath | HDFStore,
+ path: FilePath | HDFStore | None = None,
+ /,
key=None,
mode: str = "r",
errors: str = "strict",
@@ -325,6 +340,7 @@ def read_hdf(
columns: list[str] | None = None,
iterator: bool = False,
chunksize: int | None = None,
+ path_or_buf: FilePath | HDFStore | None = None,
**kwargs,
):
"""
@@ -393,6 +409,14 @@ def read_hdf(
>>> df.to_hdf('./store.h5', 'data') # doctest: +SKIP
>>> reread = pd.read_hdf('./store.h5') # doctest: +SKIP
"""
+ # validate Input
+ if path_or_buf is None and path is None:
+ raise ValueError("you need to insert a path")
+ if path_or_buf is not None and path is not None:
+ raise ValueError("pass the path as the first argument and don't pass it twice")
+ if path is not None:
+ path_or_buf = path
+
if mode not in ["r", "r+", "a"]:
raise ValueError(
f"mode {mode} is not allowed while performing a read. "
diff --git a/pandas/io/sas/sasreader.py b/pandas/io/sas/sasreader.py
index 60b48bed8e124..5c168caeeab2b 100644
--- a/pandas/io/sas/sasreader.py
+++ b/pandas/io/sas/sasreader.py
@@ -59,7 +59,8 @@ def __exit__(
@overload
def read_sas(
- filepath_or_buffer: FilePath | ReadBuffer[bytes],
+ path: FilePath | ReadBuffer[bytes] | None = None,
+ /,
*,
format: str | None = ...,
index: Hashable | None = ...,
@@ -67,13 +68,15 @@ def read_sas(
chunksize: int = ...,
iterator: bool = ...,
compression: CompressionOptions = ...,
+ filepath_or_buffer: FilePath | ReadBuffer[bytes] | None = None,
) -> ReaderBase:
...
@overload
def read_sas(
- filepath_or_buffer: FilePath | ReadBuffer[bytes],
+ path: FilePath | ReadBuffer[bytes] | None = None,
+ /,
*,
format: str | None = ...,
index: Hashable | None = ...,
@@ -81,13 +84,15 @@ def read_sas(
chunksize: None = ...,
iterator: bool = ...,
compression: CompressionOptions = ...,
+ filepath_or_buffer: FilePath | ReadBuffer[bytes] | None = None,
) -> DataFrame | ReaderBase:
...
@doc(decompression_options=_shared_docs["decompression_options"] % "filepath_or_buffer")
def read_sas(
- filepath_or_buffer: FilePath | ReadBuffer[bytes],
+ path: FilePath | ReadBuffer[bytes] | None = None,
+ /,
*,
format: str | None = None,
index: Hashable | None = None,
@@ -95,6 +100,7 @@ def read_sas(
chunksize: int | None = None,
iterator: bool = False,
compression: CompressionOptions = "infer",
+ filepath_or_buffer: FilePath | ReadBuffer[bytes] | None = None,
) -> DataFrame | ReaderBase:
"""
Read SAS files stored as either XPORT or SAS7BDAT format files.
@@ -137,6 +143,15 @@ def read_sas(
--------
>>> df = pd.read_sas("sas_data.sas7bdat") # doctest: +SKIP
"""
+
+ # validate input
+ if filepath_or_buffer is None and path is None:
+ raise ValueError("you need to insert a path")
+ if filepath_or_buffer is not None and path is not None:
+ raise ValueError("pass the path as the first argument and don't pass it twice")
+ if path is not None:
+ filepath_or_buffer = path
+
if format is None:
buffer_error_msg = (
"If this is a buffer object rather "
diff --git a/pandas/io/stata.py b/pandas/io/stata.py
index c5648a022d4a9..6d3ca7180fc9f 100644
--- a/pandas/io/stata.py
+++ b/pandas/io/stata.py
@@ -2117,7 +2117,7 @@ def value_labels(self) -> dict[str, dict[float, str]]:
@Appender(_read_stata_doc)
def read_stata(
- filepath_or_buffer: FilePath | ReadBuffer[bytes],
+ path: FilePath | ReadBuffer[bytes] | None = None,
*,
convert_dates: bool = True,
convert_categoricals: bool = True,
@@ -2130,7 +2130,16 @@ def read_stata(
iterator: bool = False,
compression: CompressionOptions = "infer",
storage_options: StorageOptions | None = None,
+ filepath_or_buffer: FilePath | ReadBuffer[bytes] | None = None,
) -> DataFrame | StataReader:
+ # validate input
+ if filepath_or_buffer is None and path is None:
+ raise ValueError("you need to insert a path")
+ if filepath_or_buffer is not None and path is not None:
+ raise ValueError("pass the path as the first argument and don't pass it twice")
+ if path is not None:
+ filepath_or_buffer = path
+
reader = StataReader(
filepath_or_buffer,
convert_dates=convert_dates,
| - [x] part of #54616
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54621 | 2023-08-18T12:28:13Z | 2024-01-31T18:50:10Z | null | 2024-01-31T18:50:10Z |
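Across all of these readers and writers the PR applies the same pattern: a new positional-only `path` parameter (note the `/` in each signature) plus the old `path_or_buf`/`filepath_or_buffer` name kept as a keyword fallback, with a validation block at the top of each function. A minimal standalone sketch of that pattern, outside pandas (the function name `read_thing` is hypothetical; the error messages mirror the ones in the diff):

```python
def read_thing(path=None, /, *, filepath_or_buffer=None):
    """Illustrative reader using the PR's dual-argument pattern.

    `path` is the new positional-only argument; `filepath_or_buffer`
    survives as a keyword argument for backward compatibility.
    """
    # Exactly one of the two spellings must be supplied.
    if filepath_or_buffer is None and path is None:
        raise ValueError("you need to insert a path")
    if filepath_or_buffer is not None and path is not None:
        raise ValueError(
            "pass the path as the first argument and don't pass it twice"
        )
    if path is not None:
        filepath_or_buffer = path
    # The real functions would hand `filepath_or_buffer` to the parser here.
    return filepath_or_buffer
```

Because `path` is positional-only, existing callers that pass the path positionally and callers that still use `filepath_or_buffer=` as a keyword both keep working, while passing both at once is rejected.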
BUG: Prevent OutOfBoundsDatetime error for constructing tz-aware series from list | diff --git a/doc/source/whatsnew/v2.1.2.rst b/doc/source/whatsnew/v2.1.2.rst
index ed8010c2ea258..d0945c1e530fe 100644
--- a/doc/source/whatsnew/v2.1.2.rst
+++ b/doc/source/whatsnew/v2.1.2.rst
@@ -34,6 +34,7 @@ Bug fixes
- Fixed bug in :meth:`Series.floordiv` for :class:`ArrowDtype` (:issue:`55561`)
- Fixed bug in :meth:`Series.rank` for ``string[pyarrow_numpy]`` dtype (:issue:`55362`)
- Fixed bug in :meth:`Series.str.extractall` for :class:`ArrowDtype` dtype being converted to object (:issue:`53846`)
+- Fixed bug in constructing :class:`Series` when dtype is a timezone aware datetime with non-nanosecond resolution raising ``OutOfBoundsDatetime`` (:issue:`54620`)
- Silence ``Period[B]`` warnings introduced by :issue:`53446` during normal plotting activity (:issue:`55138`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/_libs/tslib.pyi b/pandas/_libs/tslib.pyi
index 9819b5173db56..35a5c2626f102 100644
--- a/pandas/_libs/tslib.pyi
+++ b/pandas/_libs/tslib.pyi
@@ -28,5 +28,5 @@ def array_to_datetime(
# returned ndarray may be object dtype or datetime64[ns]
def array_to_datetime_with_tz(
- values: npt.NDArray[np.object_], tz: tzinfo
+ values: npt.NDArray[np.object_], tz: tzinfo, unit: str = ...
) -> npt.NDArray[np.int64]: ...
diff --git a/pandas/_libs/tslib.pyx b/pandas/_libs/tslib.pyx
index 4989feaf84006..93c020a46cbee 100644
--- a/pandas/_libs/tslib.pyx
+++ b/pandas/_libs/tslib.pyx
@@ -678,7 +678,7 @@ cdef _array_to_datetime_object(
return oresult_nd, None
-def array_to_datetime_with_tz(ndarray values, tzinfo tz):
+def array_to_datetime_with_tz(ndarray values, tzinfo tz, unit="ns"):
"""
Vectorized analogue to pd.Timestamp(value, tz=tz)
@@ -714,7 +714,7 @@ def array_to_datetime_with_tz(ndarray values, tzinfo tz):
else:
# datetime64, tznaive pydatetime, int, float
ts = ts.tz_localize(tz)
- ts = ts.as_unit("ns")
+ ts = ts.as_unit(unit)
ival = ts._value
# Analogous to: result[i] = ival
diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index 3cf4dde3015c9..d4fe03fa5ae3f 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -348,7 +348,7 @@ def _from_sequence_not_strict(
# DatetimeTZDtype
unit = dtype.unit
- subarr, tz, inferred_freq = _sequence_to_dt64ns(
+ subarr, tz, inferred_freq = _sequence_to_dt64(
data,
copy=copy,
tz=tz,
@@ -2172,7 +2172,7 @@ def std(
# Constructor Helpers
-def _sequence_to_dt64ns(
+def _sequence_to_dt64(
data,
*,
copy: bool = False,
@@ -2198,7 +2198,8 @@ def _sequence_to_dt64ns(
Returns
-------
result : numpy.ndarray
- The sequence converted to a numpy array with dtype ``datetime64[ns]``.
+ The sequence converted to a numpy array with dtype ``datetime64[unit]``.
+ Where `unit` is ns unless specified otherwise by `out_unit`.
tz : tzinfo or None
Either the user-provided tzinfo or one inferred from the data.
inferred_freq : Tick or None
@@ -2221,9 +2222,9 @@ def _sequence_to_dt64ns(
data, copy = maybe_convert_dtype(data, copy, tz=tz)
data_dtype = getattr(data, "dtype", None)
- out_dtype = DT64NS_DTYPE
- if out_unit is not None:
- out_dtype = np.dtype(f"M8[{out_unit}]")
+ if out_unit is None:
+ out_unit = "ns"
+ out_dtype = np.dtype(f"M8[{out_unit}]")
if data_dtype == object or is_string_dtype(data_dtype):
# TODO: We do not have tests specific to string-dtypes,
@@ -2234,8 +2235,8 @@ def _sequence_to_dt64ns(
elif tz is not None and ambiguous == "raise":
# TODO: yearfirst/dayfirst/etc?
obj_data = np.asarray(data, dtype=object)
- i8data = tslib.array_to_datetime_with_tz(obj_data, tz)
- return i8data.view(DT64NS_DTYPE), tz, None
+ i8data = tslib.array_to_datetime_with_tz(obj_data, tz, out_unit)
+ return i8data.view(out_dtype), tz, None
else:
# data comes back here as either i8 to denote UTC timestamps
# or M8[ns] to denote wall times
diff --git a/pandas/tests/arrays/datetimes/test_constructors.py b/pandas/tests/arrays/datetimes/test_constructors.py
index e513457819eb5..8845ab928e252 100644
--- a/pandas/tests/arrays/datetimes/test_constructors.py
+++ b/pandas/tests/arrays/datetimes/test_constructors.py
@@ -8,7 +8,7 @@
import pandas as pd
import pandas._testing as tm
from pandas.core.arrays import DatetimeArray
-from pandas.core.arrays.datetimes import _sequence_to_dt64ns
+from pandas.core.arrays.datetimes import _sequence_to_dt64
class TestDatetimeArrayConstructor:
@@ -44,7 +44,7 @@ def test_freq_validation(self):
"meth",
[
DatetimeArray._from_sequence,
- _sequence_to_dt64ns,
+ _sequence_to_dt64,
pd.to_datetime,
pd.DatetimeIndex,
],
@@ -105,7 +105,7 @@ def test_bool_dtype_raises(self):
DatetimeArray._from_sequence(arr)
with pytest.raises(TypeError, match=msg):
- _sequence_to_dt64ns(arr)
+ _sequence_to_dt64(arr)
with pytest.raises(TypeError, match=msg):
pd.DatetimeIndex(arr)
@@ -160,8 +160,8 @@ def test_2d(self, order):
if order == "F":
arr = arr.T
- res = _sequence_to_dt64ns(arr)
- expected = _sequence_to_dt64ns(arr.ravel())
+ res = _sequence_to_dt64(arr)
+ expected = _sequence_to_dt64(arr.ravel())
tm.assert_numpy_array_equal(res[0].ravel(), expected[0])
assert res[1] == expected[1]
diff --git a/pandas/tests/arrays/test_datetimelike.py b/pandas/tests/arrays/test_datetimelike.py
index 3f91b9b03e1de..6d756b11df75e 100644
--- a/pandas/tests/arrays/test_datetimelike.py
+++ b/pandas/tests/arrays/test_datetimelike.py
@@ -27,7 +27,7 @@
PeriodArray,
TimedeltaArray,
)
-from pandas.core.arrays.datetimes import _sequence_to_dt64ns
+from pandas.core.arrays.datetimes import _sequence_to_dt64
from pandas.core.arrays.timedeltas import sequence_to_td64ns
@@ -1314,7 +1314,7 @@ def test_from_pandas_array(dtype):
expected = cls._from_sequence(data)
tm.assert_extension_array_equal(result, expected)
- func = {"M8[ns]": _sequence_to_dt64ns, "m8[ns]": sequence_to_td64ns}[dtype]
+ func = {"M8[ns]": _sequence_to_dt64, "m8[ns]": sequence_to_td64ns}[dtype]
result = func(arr)[0]
expected = func(data)[0]
tm.assert_equal(result, expected)
diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py
index 69f6d847f5b19..68fbf079aa472 100644
--- a/pandas/tests/series/test_constructors.py
+++ b/pandas/tests/series/test_constructors.py
@@ -1148,6 +1148,15 @@ def test_constructor_with_datetime_tz(self):
result = DatetimeIndex(s, freq="infer")
tm.assert_index_equal(result, dr)
+ def test_constructor_with_datetime_tz_ms(self):
+ # GH#54620 explicit frequency
+ result = Series([Timestamp("2999-01-01")], dtype="datetime64[ms, US/Pacific]")
+ expected = Series(
+ np.array(["2999-01-01"], dtype="datetime64[ms]")
+ ).dt.tz_localize("US/Pacific")
+ tm.assert_series_equal(result, expected)
+ assert result.dtype == "datetime64[ms, US/Pacific]"
+
def test_constructor_with_datetime_tz4(self):
# inference
s = Series(
diff --git a/pandas/tests/test_downstream.py b/pandas/tests/test_downstream.py
index 735c6131ba319..72e9b9f991cc9 100644
--- a/pandas/tests/test_downstream.py
+++ b/pandas/tests/test_downstream.py
@@ -23,7 +23,7 @@
DatetimeArray,
TimedeltaArray,
)
-from pandas.core.arrays.datetimes import _sequence_to_dt64ns
+from pandas.core.arrays.datetimes import _sequence_to_dt64
from pandas.core.arrays.timedeltas import sequence_to_td64ns
@@ -316,7 +316,7 @@ def test_from_obscure_array(dtype, array_likes):
result = cls._from_sequence(data)
tm.assert_extension_array_equal(result, expected)
- func = {"M8[ns]": _sequence_to_dt64ns, "m8[ns]": sequence_to_td64ns}[dtype]
+ func = {"M8[ns]": _sequence_to_dt64, "m8[ns]": sequence_to_td64ns}[dtype]
result = func(arr)[0]
expected = func(data)[0]
tm.assert_equal(result, expected)
| - [x] closes #54442
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Note that this PR only fixes the case where the unit is explicitly specified and the datetime is timezone-aware. The more general cases appear to be more complicated. | https://api.github.com/repos/pandas-dev/pandas/pulls/54620 | 2023-08-18T12:26:53Z | 2023-10-31T16:31:45Z | null | 2023-10-31T16:31:45Z |
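The underlying overflow is easy to reproduce outside pandas: 2999-01-01 is representable as a 64-bit count of milliseconds since the epoch but not of nanoseconds, which is why hardcoding `as_unit("ns")` raised `OutOfBoundsDatetime`. A stdlib-only sketch of the unit-aware conversion the fix threads through (`to_epoch` and `_UNITS` are illustrative names, not pandas API):

```python
from datetime import datetime, timezone

# Ticks per second for each supported resolution.
_UNITS = {"s": 1, "ms": 1_000, "us": 1_000_000, "ns": 1_000_000_000}


def to_epoch(ts: datetime, unit: str = "ns") -> int:
    """Convert a datetime to an integer epoch value in `unit`.

    Mirrors how the fix threads the requested `unit` through instead
    of always converting to nanoseconds.
    """
    if unit not in _UNITS:
        raise ValueError(f"unsupported unit: {unit}")
    if ts.tzinfo is None:
        # Analogous to the tz_localize step for tz-naive inputs.
        ts = ts.replace(tzinfo=timezone.utc)
    return int(round(ts.timestamp() * _UNITS[unit]))
```

With `unit="ms"` the 2999 value fits comfortably in a signed 64-bit integer, while the same instant expressed in nanoseconds exceeds `2**63 - 1`.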
Backport PR #54615 on branch 2.1.x (DOC: Update build instructions in the README) | diff --git a/README.md b/README.md
index 8ea473beb107e..6fa20d237babe 100644
--- a/README.md
+++ b/README.md
@@ -130,23 +130,17 @@ In the `pandas` directory (same one where you found this file after
cloning the git repo), execute:
```sh
-python setup.py install
+pip install .
```
or for installing in [development mode](https://pip.pypa.io/en/latest/cli/pip_install/#install-editable):
```sh
-python -m pip install -e . --no-build-isolation --no-use-pep517
+python -m pip install -ve . --no-build-isolation --config-settings=editable-verbose=true
```
-or alternatively
-
-```sh
-python setup.py develop
-```
-
-See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/getting_started/install.html#installing-from-source).
+See the full instructions for [installing from source](https://pandas.pydata.org/docs/dev/development/contributing_environment.html).
## License
[BSD 3](LICENSE)
| Backport PR #54615: DOC: Update build instructions in the README | https://api.github.com/repos/pandas-dev/pandas/pulls/54619 | 2023-08-18T10:25:10Z | 2023-08-18T12:42:26Z | 2023-08-18T12:42:26Z | 2023-08-18T12:42:26Z |
DOC: Add information on fetching tags for contributors building pandas. | diff --git a/doc/source/development/contributing_codebase.rst b/doc/source/development/contributing_codebase.rst
index c06c0f8703d11..41f4b4d5783ea 100644
--- a/doc/source/development/contributing_codebase.rst
+++ b/doc/source/development/contributing_codebase.rst
@@ -754,7 +754,7 @@ install pandas) by typing::
your installation is probably fine and you can start contributing!
Often it is worth running only a subset of tests first around your changes before running the
-entire suite (tip: you can use the [pandas-coverage app](https://pandas-coverage-12d2130077bc.herokuapp.com/))
+entire suite (tip: you can use the `pandas-coverage app <https://pandas-coverage-12d2130077bc.herokuapp.com/>`_)
to find out which tests hit the lines of code you've modified, and then run only those).
The easiest way to do this is with::
diff --git a/doc/source/development/contributing_environment.rst b/doc/source/development/contributing_environment.rst
index 51d0edf1859c5..0cc1fe2629e46 100644
--- a/doc/source/development/contributing_environment.rst
+++ b/doc/source/development/contributing_environment.rst
@@ -214,7 +214,7 @@ due to limitations in setuptools.
The newer build system, invokes the meson backend through pip (via a `PEP 517 <https://peps.python.org/pep-0517/>`_ build).
It automatically uses all available cores on your CPU, and also avoids the need for manual rebuilds by
-rebuilding automatically whenever pandas is imported(with an editable install).
+rebuilding automatically whenever pandas is imported (with an editable install).
For these reasons, you should compile pandas with meson.
Because the meson build system is newer, you may find bugs/minor issues as it matures. You can report these bugs
@@ -228,6 +228,14 @@ To compile pandas with meson, run::
# If you do not want to see this, omit everything after --no-build-isolation
python -m pip install -ve . --no-build-isolation --config-settings editable-verbose=true
+.. note::
+ The version number is pulled from the latest repository tag. Be sure to fetch the latest tags from upstream
+ before building::
+
+ # set the upstream repository, if not done already, and fetch the latest tags
+ git remote add upstream https://github.com/pandas-dev/pandas.git
+ git fetch upstream --tags
+
**Build options**
It is possible to pass options from the pip frontend to the meson backend if you would like to configure your
| - [N/A] closes #xxxx (Replace xxxx with the GitHub issue number)
- [N/A] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [N/A] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [N/A] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [N/A] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54618 | 2023-08-18T09:54:28Z | 2023-08-18T20:29:05Z | 2023-08-18T20:29:05Z | 2023-08-21T13:09:59Z |
DOC: Convert docstring to numpydoc. | diff --git a/pandas/core/common.py b/pandas/core/common.py
index 6d419098bf279..1679e01ff9fe1 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -531,17 +531,24 @@ def convert_to_list_like(
def temp_setattr(
obj, attr: str, value, condition: bool = True
) -> Generator[None, None, None]:
- """Temporarily set attribute on an object.
-
- Args:
- obj: Object whose attribute will be modified.
- attr: Attribute to modify.
- value: Value to temporarily set attribute to.
- condition: Whether to set the attribute. Provided in order to not have to
- conditionally use this context manager.
+ """
+ Temporarily set attribute on an object.
- Yields:
- obj with modified attribute.
+ Parameters
+ ----------
+ obj : object
+ Object whose attribute will be modified.
+ attr : str
+ Attribute to modify.
+ value : Any
+ Value to temporarily set attribute to.
+ condition : bool, default True
+ Whether to set the attribute. Provided in order to not have to
+ conditionally use this context manager.
+
+ Yields
+ ------
+ object : obj with modified attribute.
"""
if condition:
old_value = getattr(obj, attr)
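The converted docstring describes a generic pattern; a minimal self-contained sketch of such a context manager (an illustration of the pattern, not the pandas implementation itself) looks like:

```python
from contextlib import contextmanager

@contextmanager
def temp_setattr(obj, attr, value, condition=True):
    """Temporarily set ``attr`` on ``obj``, restoring the old value on exit."""
    if condition:
        old_value = getattr(obj, attr)
        setattr(obj, attr, value)
    try:
        yield obj
    finally:
        if condition:
            setattr(obj, attr, old_value)

class Config:  # hypothetical object used only for demonstration
    verbose = False

with temp_setattr(Config, "verbose", True):
    assert Config.verbose is True
assert Config.verbose is False  # restored after the block
```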
| - [N/A] closes #xxxx (Replace xxxx with the GitHub issue number) https://github.com/jorisvandenbossche/euroscipy-2023-pandas-sprint/issues/7
- [N/A] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [N/A] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [N/A] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54617 | 2023-08-18T09:34:47Z | 2023-08-18T16:49:39Z | 2023-08-18T16:49:39Z | 2023-08-21T12:59:36Z |
DOC: Update build instructions in the README | diff --git a/README.md b/README.md
index 8ea473beb107e..6fa20d237babe 100644
--- a/README.md
+++ b/README.md
@@ -130,23 +130,17 @@ In the `pandas` directory (same one where you found this file after
cloning the git repo), execute:
```sh
-python setup.py install
+pip install .
```
or for installing in [development mode](https://pip.pypa.io/en/latest/cli/pip_install/#install-editable):
```sh
-python -m pip install -e . --no-build-isolation --no-use-pep517
+python -m pip install -ve . --no-build-isolation --config-settings=editable-verbose=true
```
-or alternatively
-
-```sh
-python setup.py develop
-```
-
-See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/getting_started/install.html#installing-from-source).
+See the full instructions for [installing from source](https://pandas.pydata.org/docs/dev/development/contributing_environment.html).
## License
[BSD 3](LICENSE)
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54615 | 2023-08-18T07:46:53Z | 2023-08-18T10:25:02Z | 2023-08-18T10:25:02Z | 2023-08-18T10:25:03Z |
Backport PR #54587 on branch 2.1.x (CI: Enable MacOS Python Dev tests) | diff --git a/.github/workflows/unit-tests.yml b/.github/workflows/unit-tests.yml
index 66d8320206429..030c9546fecca 100644
--- a/.github/workflows/unit-tests.yml
+++ b/.github/workflows/unit-tests.yml
@@ -77,16 +77,12 @@ jobs:
env_file: actions-311-numpydev.yaml
pattern: "not slow and not network and not single_cpu"
test_args: "-W error::DeprecationWarning -W error::FutureWarning"
- # TODO(cython3): Re-enable once next-beta(after beta 1) comes out
- # There are some warnings failing the build with -werror
- pandas_ci: "0"
- name: "Pyarrow Nightly"
env_file: actions-311-pyarrownightly.yaml
pattern: "not slow and not network and not single_cpu"
fail-fast: false
name: ${{ matrix.name || format('ubuntu-latest {0}', matrix.env_file) }}
env:
- ENV_FILE: ci/deps/${{ matrix.env_file }}
PATTERN: ${{ matrix.pattern }}
EXTRA_APT: ${{ matrix.extra_apt || '' }}
LANG: ${{ matrix.lang || 'C.UTF-8' }}
@@ -150,14 +146,13 @@ jobs:
- name: Generate extra locales
# These extra locales will be available for locale.setlocale() calls in tests
- run: |
- sudo locale-gen ${{ matrix.extra_loc }}
+ run: sudo locale-gen ${{ matrix.extra_loc }}
if: ${{ matrix.extra_loc }}
- name: Set up Conda
uses: ./.github/actions/setup-conda
with:
- environment-file: ${{ env.ENV_FILE }}
+ environment-file: ci/deps/${{ matrix.env_file }}
- name: Build Pandas
id: build
@@ -312,15 +307,14 @@ jobs:
# to the corresponding posix/windows-macos/sdist etc. workflows.
# Feel free to modify this comment as necessary.
#if: false # Uncomment this to freeze the workflow, comment it to unfreeze
+ defaults:
+ run:
+ shell: bash -eou pipefail {0}
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
- # TODO: Disable macOS for now, Github Actions bug where python is not
- # symlinked correctly to 3.12
- # xref https://github.com/actions/setup-python/issues/701
- #os: [ubuntu-22.04, macOS-latest, windows-latest]
- os: [ubuntu-22.04, windows-latest]
+ os: [ubuntu-22.04, macOS-latest, windows-latest]
timeout-minutes: 180
@@ -345,22 +339,15 @@ jobs:
with:
python-version: '3.12-dev'
- - name: Install dependencies
+ - name: Build Environment
run: |
python --version
python -m pip install --upgrade pip setuptools wheel meson[ninja]==1.0.1 meson-python==0.13.1
python -m pip install --pre --extra-index-url https://pypi.anaconda.org/scientific-python-nightly-wheels/simple numpy
python -m pip install versioneer[toml]
python -m pip install python-dateutil pytz tzdata cython hypothesis>=6.46.1 pytest>=7.3.2 pytest-xdist>=2.2.0 pytest-cov pytest-asyncio>=0.17
- python -m pip list
-
- - name: Build Pandas
- run: |
python -m pip install -ve . --no-build-isolation --no-index
+ python -m pip list
- - name: Build Version
- run: |
- python -c "import pandas; pandas.show_versions();"
-
- - name: Test
+ - name: Run Tests
uses: ./.github/actions/run-tests
| Backport PR #54587: CI: Enable MacOS Python Dev tests | https://api.github.com/repos/pandas-dev/pandas/pulls/54614 | 2023-08-18T07:31:51Z | 2023-08-18T12:48:45Z | 2023-08-18T12:48:45Z | 2023-08-18T12:48:45Z |
DEPR: deprecated nonkeyword arguments in to_json | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index 4ad450c965464..2a51f2fa1cc50 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -93,6 +93,7 @@ Other API changes
Deprecations
~~~~~~~~~~~~
- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_hdf` except ``path_or_buf``. (:issue:`54229`)
+- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_json` except ``path_or_buf``. (:issue:`54229`)
- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_latex` except ``buf``. (:issue:`54229`)
- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_pickle` except ``path``. (:issue:`54229`)
- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_string` except ``buf``. (:issue:`54229`)
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index c28ae86985896..7da41b890598d 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -2352,6 +2352,9 @@ def to_excel(
)
@final
+ @deprecate_nonkeyword_arguments(
+ version="3.0", allowed_args=["self", "path_or_buf"], name="to_json"
+ )
@doc(
storage_options=_shared_docs["storage_options"],
compression_options=_shared_docs["compression_options"] % "path_or_buf",
diff --git a/pandas/tests/io/json/test_pandas.py b/pandas/tests/io/json/test_pandas.py
index ff9b4acd96499..4ee9e1e2d1598 100644
--- a/pandas/tests/io/json/test_pandas.py
+++ b/pandas/tests/io/json/test_pandas.py
@@ -1,7 +1,10 @@
import datetime
from datetime import timedelta
from decimal import Decimal
-from io import StringIO
+from io import (
+ BytesIO,
+ StringIO,
+)
import json
import os
import sys
@@ -2106,3 +2109,15 @@ def test_json_roundtrip_string_inference(orient):
columns=pd.Index(["col 1", "col 2"], dtype=pd.ArrowDtype(pa.string())),
)
tm.assert_frame_equal(result, expected)
+
+
+def test_json_pos_args_deprecation():
+ # GH-54229
+ df = DataFrame({"a": [1, 2, 3]})
+ msg = (
+ r"Starting with pandas version 3.0 all arguments of to_json except for the "
+ r"argument 'path_or_buf' will be keyword-only."
+ )
+ with tm.assert_produces_warning(FutureWarning, match=msg):
+ buf = BytesIO()
+ df.to_json(buf, "split")
| - [x] xref #54229
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added an entry in the latest `doc/source/whatsnew/v2.2.0.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54613 | 2023-08-18T06:00:57Z | 2023-08-18T16:50:44Z | 2023-08-18T16:50:44Z | 2023-08-18T19:16:22Z |
DEPR: deprecated nonkeyword arguments in to_html | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index b90563ba43d83..0c4008a6ba48c 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -93,6 +93,7 @@ Other API changes
Deprecations
~~~~~~~~~~~~
- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_hdf` except ``path_or_buf``. (:issue:`54229`)
+- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_html` except ``buf``. (:issue:`54229`)
- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_json` except ``path_or_buf``. (:issue:`54229`)
- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_latex` except ``buf``. (:issue:`54229`)
- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_markdown` except ``buf``. (:issue:`54229`)
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 25fc5bd6664f5..05c0db0c09376 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -3134,6 +3134,9 @@ def to_html(
) -> str:
...
+ @deprecate_nonkeyword_arguments(
+ version="3.0", allowed_args=["self", "buf"], name="to_html"
+ )
@Substitution(
header_type="bool",
header="Whether to print column labels, default True",
diff --git a/pandas/tests/io/formats/test_to_html.py b/pandas/tests/io/formats/test_to_html.py
index 3b5fe329c320c..5811485406b86 100644
--- a/pandas/tests/io/formats/test_to_html.py
+++ b/pandas/tests/io/formats/test_to_html.py
@@ -978,3 +978,14 @@ def test_to_html_empty_complex_array():
"</table>"
)
assert result == expected
+
+
+def test_to_html_pos_args_deprecation():
+ # GH-54229
+ df = DataFrame({"a": [1, 2, 3]})
+ msg = (
+ r"Starting with pandas version 3.0 all arguments of to_html except for the "
+ r"argument 'buf' will be keyword-only."
+ )
+ with tm.assert_produces_warning(FutureWarning, match=msg):
+ df.to_html(None, None)
| - [x] xref #54229
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added an entry in the latest `doc/source/whatsnew/v2.2.0.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54612 | 2023-08-18T05:34:25Z | 2023-08-21T18:35:26Z | 2023-08-21T18:35:26Z | 2023-08-21T18:35:52Z |
BUG: merge not always following documented sort behavior | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index e8e2b8d0ef908..abc3f5f5cb135 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -39,10 +39,49 @@ Notable bug fixes
These are bug fixes that might have notable behavior changes.
-.. _whatsnew_220.notable_bug_fixes.notable_bug_fix1:
+.. _whatsnew_220.notable_bug_fixes.merge_sort_behavior:
-notable_bug_fix1
-^^^^^^^^^^^^^^^^
+:func:`merge` and :meth:`DataFrame.join` now consistently follow documented sort behavior
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+In previous versions of pandas, :func:`merge` and :meth:`DataFrame.join` did not
+always return a result that followed the documented sort behavior. pandas now
+follows the documented sort behavior in merge and join operations (:issue:`54611`).
+
+As documented, ``sort=True`` sorts the join keys lexicographically in the resulting
+:class:`DataFrame`. With ``sort=False``, the order of the join keys depends on the
+join type (``how`` keyword):
+
+- ``how="left"``: preserve the order of the left keys
+- ``how="right"``: preserve the order of the right keys
+- ``how="inner"``: preserve the order of the left keys
+- ``how="outer"``: sort keys lexicographically
+
+One example with changing behavior is inner joins with non-unique left join keys
+and ``sort=False``:
+
+.. ipython:: python
+
+ left = pd.DataFrame({"a": [1, 2, 1]})
+ right = pd.DataFrame({"a": [1, 2]})
+ result = pd.merge(left, right, how="inner", on="a", sort=False)
+
+*Old Behavior*
+
+.. code-block:: ipython
+
+ In [5]: result
+ Out[5]:
+ a
+ 0 1
+ 1 1
+ 2 2
+
+*New Behavior*
+
+.. ipython:: python
+
+ result
.. _whatsnew_220.notable_bug_fixes.notable_bug_fix2:
diff --git a/pandas/_libs/join.pyi b/pandas/_libs/join.pyi
index 7ee649a55fd8f..c7761cbf8aba9 100644
--- a/pandas/_libs/join.pyi
+++ b/pandas/_libs/join.pyi
@@ -6,6 +6,7 @@ def inner_join(
left: np.ndarray, # const intp_t[:]
right: np.ndarray, # const intp_t[:]
max_groups: int,
+ sort: bool = ...,
) -> tuple[npt.NDArray[np.intp], npt.NDArray[np.intp]]: ...
def left_outer_join(
left: np.ndarray, # const intp_t[:]
diff --git a/pandas/_libs/join.pyx b/pandas/_libs/join.pyx
index 5929647468785..385a089a8c59d 100644
--- a/pandas/_libs/join.pyx
+++ b/pandas/_libs/join.pyx
@@ -23,7 +23,7 @@ from pandas._libs.dtypes cimport (
@cython.wraparound(False)
@cython.boundscheck(False)
def inner_join(const intp_t[:] left, const intp_t[:] right,
- Py_ssize_t max_groups):
+ Py_ssize_t max_groups, bint sort=True):
cdef:
Py_ssize_t i, j, k, count = 0
intp_t[::1] left_sorter, right_sorter
@@ -70,7 +70,20 @@ def inner_join(const intp_t[:] left, const intp_t[:] right,
_get_result_indexer(left_sorter, left_indexer)
_get_result_indexer(right_sorter, right_indexer)
- return np.asarray(left_indexer), np.asarray(right_indexer)
+ if not sort:
+ # if not asked to sort, revert to original order
+ if len(left) == len(left_indexer):
+ # no multiple matches for any row on the left
+ # this is a short-cut to avoid groupsort_indexer
+ # otherwise, the `else` path also works in this case
+ rev = np.empty(len(left), dtype=np.intp)
+ rev.put(np.asarray(left_sorter), np.arange(len(left)))
+ else:
+ rev, _ = groupsort_indexer(left_indexer, len(left))
+
+ return np.asarray(left_indexer).take(rev), np.asarray(right_indexer).take(rev)
+ else:
+ return np.asarray(left_indexer), np.asarray(right_indexer)
@cython.wraparound(False)
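The ``sort=False`` path added to ``inner_join`` above undoes the internal group-sort with a reverse permutation (``rev``). Stripped of Cython and pandas specifics, the idea is simply inverting an argsort; a plain-Python sketch:

```python
left = [30, 10, 20]

# argsort: positions of left's rows in sorted-key order
sorter = sorted(range(len(left)), key=left.__getitem__)   # [1, 2, 0]

# rev inverts the permutation: rev[i] = where original row i landed
rev = [0] * len(left)
for sorted_pos, orig_row in enumerate(sorter):
    rev[orig_row] = sorted_pos                            # rev == [2, 0, 1]

# taking sorted results with rev restores the original row order
sorted_vals = [left[i] for i in sorter]                   # [10, 20, 30]
restored = [sorted_vals[p] for p in rev]
print(restored)  # [30, 10, 20]
```

This mirrors the ``rev.put(left_sorter, np.arange(len(left)))`` shortcut in the diff for the case where no left row matched multiple times.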
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 889e138177fae..c52f9cae78c91 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -418,10 +418,10 @@
lkey value_x rkey value_y
0 foo 1 foo 5
1 foo 1 foo 8
-2 foo 5 foo 5
-3 foo 5 foo 8
-4 bar 2 bar 6
-5 baz 3 baz 7
+2 bar 2 bar 6
+3 baz 3 baz 7
+4 foo 5 foo 5
+5 foo 5 foo 8
Merge DataFrames df1 and df2 with specified left and right suffixes
appended to any overlapping columns.
@@ -431,10 +431,10 @@
lkey value_left rkey value_right
0 foo 1 foo 5
1 foo 1 foo 8
-2 foo 5 foo 5
-3 foo 5 foo 8
-4 bar 2 bar 6
-5 baz 3 baz 7
+2 bar 2 bar 6
+3 baz 3 baz 7
+4 foo 5 foo 5
+5 foo 5 foo 8
Merge DataFrames df1 and df2, but raise an exception if the DataFrames have
any overlapping columns.
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 3d2e381fc52ce..a49db84450bb3 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -4633,7 +4633,7 @@ def join(
_validate_join_method(how)
if not self.is_unique and not other.is_unique:
- return self._join_non_unique(other, how=how)
+ return self._join_non_unique(other, how=how, sort=sort)
elif not self.is_unique or not other.is_unique:
if self.is_monotonic_increasing and other.is_monotonic_increasing:
# Note: 2023-08-15 we *do* have tests that get here with
@@ -4645,7 +4645,7 @@ def join(
# go through object dtype for ea till engine is supported properly
return self._join_monotonic(other, how=how)
else:
- return self._join_non_unique(other, how=how)
+ return self._join_non_unique(other, how=how, sort=sort)
elif (
# GH48504: exclude MultiIndex to avoid going through MultiIndex._values
self.is_monotonic_increasing
@@ -4679,15 +4679,13 @@ def _join_via_get_indexer(
elif how == "right":
join_index = other
elif how == "inner":
- # TODO: sort=False here for backwards compat. It may
- # be better to use the sort parameter passed into join
- join_index = self.intersection(other, sort=False)
+ join_index = self.intersection(other, sort=sort)
elif how == "outer":
# TODO: sort=True here for backwards compat. It may
# be better to use the sort parameter passed into join
join_index = self.union(other)
- if sort:
+ if sort and how in ["left", "right"]:
join_index = join_index.sort_values()
if join_index is self:
@@ -4784,7 +4782,7 @@ def _join_multi(self, other: Index, how: JoinHow):
@final
def _join_non_unique(
- self, other: Index, how: JoinHow = "left"
+ self, other: Index, how: JoinHow = "left", sort: bool = False
) -> tuple[Index, npt.NDArray[np.intp], npt.NDArray[np.intp]]:
from pandas.core.reshape.merge import get_join_indexers
@@ -4792,7 +4790,7 @@ def _join_non_unique(
assert self.dtype == other.dtype
left_idx, right_idx = get_join_indexers(
- [self._values], [other._values], how=how, sort=True
+ [self._values], [other._values], how=how, sort=sort
)
mask = left_idx == -1
diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py
index c2cb9d643ca87..140a3024a8684 100644
--- a/pandas/core/reshape/merge.py
+++ b/pandas/core/reshape/merge.py
@@ -1679,6 +1679,9 @@ def get_join_indexers(
elif not sort and how in ["left", "outer"]:
return _get_no_sort_one_missing_indexer(left_n, False)
+ if not sort and how == "outer":
+ sort = True
+
# get left & right join labels and num. of levels at each location
mapped = (
_factorize_keys(left_keys[n], right_keys[n], sort=sort, how=how)
@@ -1697,7 +1700,7 @@ def get_join_indexers(
lkey, rkey, count = _factorize_keys(lkey, rkey, sort=sort, how=how)
# preserve left frame order if how == 'left' and sort == False
kwargs = {}
- if how in ("left", "right"):
+ if how in ("inner", "left", "right"):
kwargs["sort"] = sort
join_func = {
"inner": libjoin.inner_join,
diff --git a/pandas/tests/extension/base/reshaping.py b/pandas/tests/extension/base/reshaping.py
index 5d9c03e1b2569..a9bd12917e73e 100644
--- a/pandas/tests/extension/base/reshaping.py
+++ b/pandas/tests/extension/base/reshaping.py
@@ -236,9 +236,9 @@ def test_merge_on_extension_array_duplicates(self, data):
result = pd.merge(df1, df2, on="key")
expected = pd.DataFrame(
{
- "key": key.take([0, 0, 0, 0, 1]),
- "val_x": [1, 1, 3, 3, 2],
- "val_y": [1, 3, 1, 3, 2],
+ "key": key.take([0, 0, 1, 2, 2]),
+ "val_x": [1, 1, 2, 3, 3],
+ "val_y": [1, 3, 2, 1, 3],
}
)
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/indexes/numeric/test_join.py b/pandas/tests/indexes/numeric/test_join.py
index 93ff6238b90ff..918d505216735 100644
--- a/pandas/tests/indexes/numeric/test_join.py
+++ b/pandas/tests/indexes/numeric/test_join.py
@@ -11,13 +11,13 @@ def test_join_non_unique(self):
joined, lidx, ridx = left.join(left, return_indexers=True)
- exp_joined = Index([3, 3, 3, 3, 4, 4, 4, 4])
+ exp_joined = Index([4, 4, 4, 4, 3, 3, 3, 3])
tm.assert_index_equal(joined, exp_joined)
- exp_lidx = np.array([2, 2, 3, 3, 0, 0, 1, 1], dtype=np.intp)
+ exp_lidx = np.array([0, 0, 1, 1, 2, 2, 3, 3], dtype=np.intp)
tm.assert_numpy_array_equal(lidx, exp_lidx)
- exp_ridx = np.array([2, 3, 2, 3, 0, 1, 0, 1], dtype=np.intp)
+ exp_ridx = np.array([0, 1, 0, 1, 2, 3, 2, 3], dtype=np.intp)
tm.assert_numpy_array_equal(ridx, exp_ridx)
def test_join_inner(self):
diff --git a/pandas/tests/reshape/merge/test_merge.py b/pandas/tests/reshape/merge/test_merge.py
index 91510fae0a9b1..50a534ad36bcc 100644
--- a/pandas/tests/reshape/merge/test_merge.py
+++ b/pandas/tests/reshape/merge/test_merge.py
@@ -1462,13 +1462,14 @@ def test_merge_readonly(self):
def _check_merge(x, y):
for how in ["inner", "left", "outer"]:
- result = x.join(y, how=how)
+ for sort in [True, False]:
+ result = x.join(y, how=how, sort=sort)
- expected = merge(x.reset_index(), y.reset_index(), how=how, sort=True)
- expected = expected.set_index("index")
+ expected = merge(x.reset_index(), y.reset_index(), how=how, sort=sort)
+ expected = expected.set_index("index")
- # TODO check_names on merge?
- tm.assert_frame_equal(result, expected, check_names=False)
+ # TODO check_names on merge?
+ tm.assert_frame_equal(result, expected, check_names=False)
class TestMergeDtypes:
@@ -1751,7 +1752,7 @@ def test_merge_string_dtype(self, how, expected_data, any_string_dtype):
"how, expected_data",
[
("inner", [[True, 1, 4], [False, 5, 3]]),
- ("outer", [[True, 1, 4], [False, 5, 3]]),
+ ("outer", [[False, 5, 3], [True, 1, 4]]),
("left", [[True, 1, 4], [False, 5, 3]]),
("right", [[False, 5, 3], [True, 1, 4]]),
],
@@ -2331,9 +2332,9 @@ def test_merge_suffix(col1, col2, kwargs, expected_cols):
"outer",
DataFrame(
{
- "A": [100, 200, 1, 300],
- "B1": [60, 70, 80, np.nan],
- "B2": [600, 700, np.nan, 800],
+ "A": [1, 100, 200, 300],
+ "B1": [80, 60, 70, np.nan],
+ "B2": [np.nan, 600, 700, 800],
}
),
),
@@ -2752,9 +2753,9 @@ def test_merge_outer_with_NaN(dtype):
result = merge(right, left, on="key", how="outer")
expected = DataFrame(
{
- "key": [np.nan, np.nan, 1, 2],
- "col2": [3, 4, np.nan, np.nan],
- "col1": [np.nan, np.nan, 1, 2],
+ "key": [1, 2, np.nan, np.nan],
+ "col2": [np.nan, np.nan, 3, 4],
+ "col1": [1, 2, np.nan, np.nan],
},
dtype=dtype,
)
@@ -2847,3 +2848,79 @@ def test_merge_multiindex_single_level():
result = df.merge(df2, left_on=["col"], right_index=True, how="left")
tm.assert_frame_equal(result, expected)
+
+
+@pytest.mark.parametrize("how", ["left", "right", "inner", "outer"])
+@pytest.mark.parametrize("sort", [True, False])
+@pytest.mark.parametrize("on_index", [True, False])
+@pytest.mark.parametrize("left_unique", [True, False])
+@pytest.mark.parametrize("left_monotonic", [True, False])
+@pytest.mark.parametrize("right_unique", [True, False])
+@pytest.mark.parametrize("right_monotonic", [True, False])
+def test_merge_combinations(
+ how, sort, on_index, left_unique, left_monotonic, right_unique, right_monotonic
+):
+ # GH 54611
+ left = [2, 3]
+ if left_unique:
+ left.append(4 if left_monotonic else 1)
+ else:
+ left.append(3 if left_monotonic else 2)
+
+ right = [2, 3]
+ if right_unique:
+ right.append(4 if right_monotonic else 1)
+ else:
+ right.append(3 if right_monotonic else 2)
+
+ left = DataFrame({"key": left})
+ right = DataFrame({"key": right})
+
+ if on_index:
+ left = left.set_index("key")
+ right = right.set_index("key")
+ on_kwargs = {"left_index": True, "right_index": True}
+ else:
+ on_kwargs = {"on": "key"}
+
+ result = merge(left, right, how=how, sort=sort, **on_kwargs)
+
+ if on_index:
+ left = left.reset_index()
+ right = right.reset_index()
+
+ if how in ["left", "right", "inner"]:
+ if how in ["left", "inner"]:
+ expected, other, other_unique = left, right, right_unique
+ else:
+ expected, other, other_unique = right, left, left_unique
+ if how == "inner":
+ keep_values = set(left["key"].values).intersection(right["key"].values)
+ keep_mask = expected["key"].isin(keep_values)
+ expected = expected[keep_mask]
+ if sort:
+ expected = expected.sort_values("key")
+ if not other_unique:
+ other_value_counts = other["key"].value_counts()
+ repeats = other_value_counts.reindex(expected["key"].values, fill_value=1)
+ repeats = repeats.astype(np.intp)
+ expected = expected["key"].repeat(repeats.values)
+ expected = expected.to_frame()
+ elif how == "outer":
+ if on_index and left_unique and left["key"].equals(right["key"]):
+ expected = DataFrame({"key": left["key"]})
+ else:
+ left_counts = left["key"].value_counts()
+ right_counts = right["key"].value_counts()
+ expected_counts = left_counts.mul(right_counts, fill_value=1)
+ expected_counts = expected_counts.astype(np.intp)
+ expected = expected_counts.index.values.repeat(expected_counts.values)
+ expected = DataFrame({"key": expected})
+ expected = expected.sort_values("key")
+
+ if on_index:
+ expected = expected.set_index("key")
+ else:
+ expected = expected.reset_index(drop=True)
+
+ tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/reshape/merge/test_multi.py b/pandas/tests/reshape/merge/test_multi.py
index 088d1e7e3c85e..ab010bdb909f1 100644
--- a/pandas/tests/reshape/merge/test_multi.py
+++ b/pandas/tests/reshape/merge/test_multi.py
@@ -741,10 +741,8 @@ def test_join_multi_levels2(self):
expected = (
DataFrame(
{
- "household_id": [1, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 4],
+ "household_id": [2, 2, 2, 3, 3, 3, 3, 3, 3, 1, 2, 4],
"asset_id": [
- "nl0000301109",
- "nl0000301109",
"gb00b03mlx29",
"gb00b03mlx29",
"gb00b03mlx29",
@@ -754,11 +752,11 @@ def test_join_multi_levels2(self):
"lu0197800237",
"lu0197800237",
"nl0000289965",
+ "nl0000301109",
+ "nl0000301109",
None,
],
"t": [
- None,
- None,
233,
234,
235,
@@ -769,10 +767,10 @@ def test_join_multi_levels2(self):
181,
None,
None,
+ None,
+ None,
],
"share": [
- 1.0,
- 0.4,
0.6,
0.6,
0.6,
@@ -783,10 +781,10 @@ def test_join_multi_levels2(self):
0.6,
0.25,
1.0,
+ 0.4,
+ 1.0,
],
"log_return": [
- None,
- None,
0.09604978,
-0.06524096,
0.03532373,
@@ -797,6 +795,8 @@ def test_join_multi_levels2(self):
0.036997,
None,
None,
+ None,
+ None,
],
}
)
diff --git a/pandas/tests/strings/test_cat.py b/pandas/tests/strings/test_cat.py
index a6303610b2037..3e620b7664335 100644
--- a/pandas/tests/strings/test_cat.py
+++ b/pandas/tests/strings/test_cat.py
@@ -106,7 +106,7 @@ def test_str_cat_categorical(index_or_series, dtype_caller, dtype_target, sep):
# Series/Index with Series having different Index
t = Series(t.values, index=t.values)
- expected = Index(["aa", "aa", "aa", "bb", "bb"])
+ expected = Index(["aa", "aa", "bb", "bb", "aa"])
expected = expected if box == Index else Series(expected, index=expected.str[:1])
result = s.str.cat(t, sep=sep)
| - [x] closes #18776
- [x] closes #24730
- [x] closes #33554
- [x] closes #40608
- [x] closes #48021
- [x] closes #53157
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/v2.2.0.rst` file if fixing a bug or adding a new feature.
When merging dataframes, a number of different code paths are hit depending on the arguments passed (e.g. how, sort, on index vs columns) as well as on left/right key characteristics (e.g. unique, monotonic).
The resulting sort behavior is not always consistent and does not always align with documented behavior.
The docs state:
```
sort: bool, default False
Sort the join keys lexicographically in the result DataFrame. If False, the order
of the join keys depends on the join type (how keyword).
...
left: preserve the order of the left keys
right: preserve the order of the right keys
outer: sort keys lexicographically
inner: preserve the order of the left keys
```
This PR aims to fix the sort behavior in cases where it does not follow the documented behavior, and adds tests to validate sort behavior across a wide range of arguments.
NOTE: a few existing tests that relied on incorrect sort behavior were updated.
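A hypothetical snippet (not taken from the PR's test suite) illustrating the documented semantics being targeted:

```python
import pandas as pd

# Left/right frames with keys in different orders
left = pd.DataFrame({"key": [3, 1, 2], "a": ["x", "y", "z"]})
right = pd.DataFrame({"key": [1, 2, 3], "b": [10, 20, 30]})

# how="left", sort=False: preserve the order of the left keys
unsorted = pd.merge(left, right, on="key", how="left", sort=False)
print(list(unsorted["key"]))  # [3, 1, 2]

# sort=True: sort the join keys lexicographically in the result
sorted_keys = pd.merge(left, right, on="key", how="left", sort=True)
print(list(sorted_keys["key"]))  # [1, 2, 3]
```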
| https://api.github.com/repos/pandas-dev/pandas/pulls/54611 | 2023-08-18T03:21:38Z | 2023-08-23T00:30:46Z | 2023-08-23T00:30:46Z | 2023-09-06T00:54:01Z |
improved explanation of linear interpolation in quantile | diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx
index 84b102bd4a262..1cfe83b2e4714 100644
--- a/pandas/_libs/tslibs/offsets.pyx
+++ b/pandas/_libs/tslibs/offsets.pyx
@@ -1373,6 +1373,25 @@ class DateOffset(RelativeDeltaOffset, metaclass=OffsetMeta):
previous midnight.
**kwds
Temporal parameter that add to or replace the offset value.
+ weekday : int {0, 1, ..., 6}, default 0
+
+ A specific integer for the day of the week.
+ - 0 is Monday
+ - 1 is Tuesday
+ - 2 is Wednesday
+ - 3 is Thursday
+ - 4 is Friday
+ - 5 is Saturday
+ - 6 is Sunday
+
+ Alternatively, the Weekday type from dateutil.relativedelta can be used.
+ - MO is Monday
+ - TU is Tuesday
+ - WE is Wednesday
+ - TH is Thursday
+ - FR is Friday
+ - SA is Saturday
+ - SU is Sunday.
Parameters that **add** to the offset (like Timedelta):
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 079366a942f8e..ffb02790b027d 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -2683,13 +2683,18 @@ def quantile(
This optional parameter specifies the interpolation method to use,
when the desired quantile lies between two data points `i` and `j`:
- * linear: `i + (j - i) * fraction`, where `fraction` is the
- fractional part of the index surrounded by `i` and `j`.
+ * linear: `i + (j - i) * fraction`, where `fraction` is the proportion
+ of the distance between `i` and `j`. It is the relative position
+ of the desired quantile value between `i` and `j`,
+ hence fraction = (desired_quantile - i) / (j - i).
* lower: `i`.
* higher: `j`.
* nearest: `i` or `j` whichever is nearest.
* midpoint: (`i` + `j`) / 2.
+ For example, if (len(self) - 1) * q is 9.6, and the elements
+ at indices 9 and 10 are 3 and 4, return 0.4 * 3 + 0.6 * 4 = 3.6
+
Returns
-------
float or Series
| - [ ] closes #51745
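A small worked example of the formula described above (illustrative only, not part of the diff):

```python
import pandas as pd

s = pd.Series([1, 2, 3, 4])

# For q=0.25 the desired position is (len(s) - 1) * q = 3 * 0.25 = 0.75,
# so i = 1 (at index 0), j = 2 (at index 1) and fraction = 0.75:
# linear result = i + (j - i) * fraction = 1 + (2 - 1) * 0.75 = 1.75
print(s.quantile(0.25))  # 1.75

# "lower" and "higher" return i and j themselves instead of interpolating
print(s.quantile(0.25, interpolation="lower"))
print(s.quantile(0.25, interpolation="higher"))
```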
| https://api.github.com/repos/pandas-dev/pandas/pulls/54610 | 2023-08-18T02:05:42Z | 2023-08-21T19:14:52Z | null | 2023-08-21T19:14:53Z |
Fixes NonExistentTimeError when resampling weekly | diff --git a/pandas/core/resample.py b/pandas/core/resample.py
index 9b8d1c870091d..be8d30bf54ad0 100644
--- a/pandas/core/resample.py
+++ b/pandas/core/resample.py
@@ -13,6 +13,7 @@
import warnings
import numpy as np
+from pytz import NonExistentTimeError
from pandas._libs import lib
from pandas._libs.tslibs import (
@@ -2494,6 +2495,17 @@ def _get_timestamp_range_edges(
first = first.tz_localize(index_tz)
last = last.tz_localize(index_tz)
else:
+ # Added to handle non-existent times when localizing to
+ # time-zones with negative time-zone difference
+ try:
+ first = first.tz_localize(None)
+ except NonExistentTimeError:
+ first = first.tz_localize(None, nonexistent="shift_forward")
+
+ try:
+ last = last.tz_localize(None)
+ except NonExistentTimeError:
+ last = last.tz_localize(None, nonexistent="shift_forward")
if isinstance(origin, Timestamp):
first = origin
diff --git a/pandas/tests/resample/test_resampler_grouper.py b/pandas/tests/resample/test_resampler_grouper.py
index 62b0bc2012af1..20f6c3d58de66 100644
--- a/pandas/tests/resample/test_resampler_grouper.py
+++ b/pandas/tests/resample/test_resampler_grouper.py
@@ -1,3 +1,4 @@
+from datetime import datetime
from textwrap import dedent
import numpy as np
@@ -696,3 +697,19 @@ def test_groupby_resample_kind(kind):
)
expected = Series([1, 3, 2, 4], index=expected_index, name="value")
tm.assert_series_equal(result, expected)
+
+
+def test_resample_dataframe_with_negative_timezone_difference():
+ df = DataFrame({"ts": [datetime(2005, 10, 9, 21)], "values": [10.0]})
+ df["ts"] = df["ts"].dt.tz_localize("Chile/Continental")
+ result = df.resample("W-Mon", on="ts", closed="left", label="left").sum()
+ expected = DataFrame(
+ {
+ "values": {
+ Timestamp("2005-10-03 00:00:00-0400", tz="Chile/Continental"): 10.0
+ }
+ }
+ )
+ expected.index.name = "ts"
+ expected.index.freq = "W-Mon"
+ tm.assert_frame_equal(result, expected)
| - [ ] closes #53666
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests)
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
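For context, a minimal sketch of the non-existent-time behavior this PR works around, using the well-known US Eastern spring-forward transition rather than the Chilean one from the test:

```python
import pandas as pd
import pytz

# 02:30 does not exist on 2023-03-12 in US Eastern (clocks jump 02:00 -> 03:00)
ts = pd.Timestamp("2023-03-12 02:30:00")
try:
    ts.tz_localize("America/New_York")
except pytz.exceptions.NonExistentTimeError:
    print("non-existent local time")

# nonexistent="shift_forward" moves to the closest existing time instead
shifted = ts.tz_localize("America/New_York", nonexistent="shift_forward")
print(shifted.hour)  # 3
```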
| https://api.github.com/repos/pandas-dev/pandas/pulls/54609 | 2023-08-18T01:53:24Z | 2023-09-11T16:38:43Z | null | 2023-09-11T16:38:44Z |
DOC: add examples to offsets classes: FY5253, FY5253Quarter | diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx
index 958fe1181d309..9be582be3caa8 100644
--- a/pandas/_libs/tslibs/offsets.pyx
+++ b/pandas/_libs/tslibs/offsets.pyx
@@ -3540,6 +3540,9 @@ cdef class FY5253(FY5253Mixin):
Parameters
----------
n : int
+ The number of fiscal years represented.
+ normalize : bool, default False
+ Normalize start/end dates to midnight before generating date range.
weekday : int {0, 1, ..., 6}, default 0
A specific integer for the day of the week.
@@ -3562,11 +3565,31 @@ cdef class FY5253(FY5253Mixin):
- "nearest" means year end is **weekday** closest to last day of month in year.
- "last" means year end is final **weekday** of the final month in fiscal year.
+ See Also
+ --------
+ :class:`~pandas.tseries.offsets.DateOffset` : Standard kind of date increment.
+
Examples
--------
+ In the example below, the default parameters give the next 52-53 week fiscal year.
+
>>> ts = pd.Timestamp(2022, 1, 1)
>>> ts + pd.offsets.FY5253()
Timestamp('2022-01-31 00:00:00')
+
+ The ``startingMonth`` parameter specifies
+ the month in which fiscal years end.
+
+ >>> ts = pd.Timestamp(2022, 1, 1)
+ >>> ts + pd.offsets.FY5253(startingMonth=3)
+ Timestamp('2022-03-28 00:00:00')
+
+ A 52-53 week fiscal year can be specified with
+ the ``weekday`` and ``variation`` parameters.
+
+ >>> ts = pd.Timestamp(2022, 1, 1)
+ >>> ts + pd.offsets.FY5253(weekday=5, startingMonth=12, variation="last")
+ Timestamp('2022-12-31 00:00:00')
"""
_prefix = "RE"
@@ -3720,6 +3743,9 @@ cdef class FY5253Quarter(FY5253Mixin):
Parameters
----------
n : int
+ The number of business quarters represented.
+ normalize : bool, default False
+ Normalize start/end dates to midnight before generating date range.
weekday : int {0, 1, ..., 6}, default 0
A specific integer for the day of the week.
@@ -3745,11 +3771,32 @@ cdef class FY5253Quarter(FY5253Mixin):
- "nearest" means year end is **weekday** closest to last day of month in year.
- "last" means year end is final **weekday** of the final month in fiscal year.
+ See Also
+ --------
+ :class:`~pandas.tseries.offsets.DateOffset` : Standard kind of date increment.
+
Examples
--------
+ In the example below, the default parameters give
+ the next business quarter for a 52-53 week fiscal year.
+
>>> ts = pd.Timestamp(2022, 1, 1)
>>> ts + pd.offsets.FY5253Quarter()
Timestamp('2022-01-31 00:00:00')
+
+ The ``startingMonth`` parameter specifies
+ the month in which fiscal years end.
+
+ >>> ts = pd.Timestamp(2022, 1, 1)
+ >>> ts + pd.offsets.FY5253Quarter(startingMonth=3)
+ Timestamp('2022-03-28 00:00:00')
+
+ Business quarters for a 52-53 week fiscal year can be specified with
+ the ``weekday`` and ``variation`` parameters.
+
+ >>> ts = pd.Timestamp(2022, 1, 1)
+ >>> ts + pd.offsets.FY5253Quarter(weekday=5, startingMonth=12, variation="last")
+ Timestamp('2022-04-02 00:00:00')
"""
_prefix = "REQ"
| xref #52431
Added examples, a "See Also" section, and missing parameters to the offsets classes `FY5253` and `FY5253Quarter`.
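The doctest outputs added in this diff can be reproduced as a plain script:

```python
import pandas as pd

ts = pd.Timestamp(2022, 1, 1)

# Default FY5253: weekday=0 (Monday), startingMonth=1, variation="nearest"
print(ts + pd.offsets.FY5253())  # 2022-01-31 00:00:00

# Fiscal years ending in March instead
print(ts + pd.offsets.FY5253(startingMonth=3))  # 2022-03-28 00:00:00
```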
| https://api.github.com/repos/pandas-dev/pandas/pulls/54608 | 2023-08-17T21:59:04Z | 2023-08-21T19:23:04Z | 2023-08-21T19:23:04Z | 2023-08-21T19:23:11Z |
STY: Enable ruff's flynt, flake8-logging-format | diff --git a/pandas/io/formats/html.py b/pandas/io/formats/html.py
index ce59985b8f352..b1a3504d46b27 100644
--- a/pandas/io/formats/html.py
+++ b/pandas/io/formats/html.py
@@ -633,7 +633,7 @@ def write_style(self) -> None:
else:
element_props.append(("thead th", "text-align", "right"))
template_mid = "\n\n".join(template_select % t for t in element_props)
- template = dedent("\n".join((template_first, template_mid, template_last)))
+ template = dedent(f"{template_first}\n{template_mid}\n{template_last}")
self.write(template)
def render(self) -> list[str]:
diff --git a/pandas/io/formats/string.py b/pandas/io/formats/string.py
index 769f9dee1c31a..cdad388592717 100644
--- a/pandas/io/formats/string.py
+++ b/pandas/io/formats/string.py
@@ -28,7 +28,7 @@ def __init__(self, fmt: DataFrameFormatter, line_width: int | None = None) -> No
def to_string(self) -> str:
text = self._get_string_representation()
if self.fmt.should_show_dimensions:
- text = "".join([text, self.fmt.dimensions_info])
+ text = f"{text}{self.fmt.dimensions_info}"
return text
def _get_strcols(self) -> list[list[str]]:
diff --git a/pyproject.toml b/pyproject.toml
index c28f9259c749c..0fc753debe030 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -229,7 +229,7 @@ select = [
# flake8-gettext
"INT",
# pylint
- "PLC", "PLE", "PLR", "PLW",
+ "PL",
# misc lints
"PIE",
# flake8-pyi
@@ -252,6 +252,10 @@ select = [
"NPY002",
# Perflint
"PERF",
+ # flynt
+ "FLY",
+ # flake8-logging-format
+ "G",
]
ignore = [
@@ -356,7 +360,7 @@ exclude = [
"asv_bench/*" = ["TID", "NPY002"]
# to be enabled gradually
"pandas/core/*" = ["PLR5501"]
-"pandas/tests/*" = ["B028"]
+"pandas/tests/*" = ["B028", "FLY"]
"scripts/*" = ["B028"]
# Keep this one enabled
"pandas/_typing.py" = ["TCH"]
diff --git a/scripts/tests/test_validate_docstrings.py b/scripts/tests/test_validate_docstrings.py
index aab29fce89abe..ffe1e9acd1185 100644
--- a/scripts/tests/test_validate_docstrings.py
+++ b/scripts/tests/test_validate_docstrings.py
@@ -110,10 +110,10 @@ def _import_path(self, klass=None, func=None):
base_path = "scripts.tests.test_validate_docstrings"
if klass:
- base_path = ".".join([base_path, klass])
+ base_path = f"{base_path}.{klass}"
if func:
- base_path = ".".join([base_path, func])
+ base_path = f"{base_path}.{func}"
return base_path
diff --git a/scripts/validate_docstrings.py b/scripts/validate_docstrings.py
index b1b63b469ec3b..0a6a852bb0f85 100755
--- a/scripts/validate_docstrings.py
+++ b/scripts/validate_docstrings.py
@@ -143,7 +143,7 @@ def get_api_items(api_doc_fd):
func = getattr(func, part)
yield (
- ".".join([current_module, line_stripped]),
+ f"{current_module}.{line_stripped}",
func,
current_section,
current_subsection,
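The ``FLY`` rewrites above are behavior-preserving; e.g. for ``_import_path`` (the class name below is a placeholder chosen for illustration):

```python
# flynt (FLY) rewrites str.join over a static list into an f-string;
# both spellings build the same dotted path.
base_path = "scripts.tests.test_validate_docstrings"
klass = "BadDocstrings"  # placeholder class name

joined = ".".join([base_path, klass])
fstring = f"{base_path}.{klass}"
print(joined == fstring)  # True
```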
| null | https://api.github.com/repos/pandas-dev/pandas/pulls/54607 | 2023-08-17T21:51:53Z | 2023-08-21T19:22:20Z | 2023-08-21T19:22:20Z | 2023-08-21T19:22:23Z |
CLN: Simplify merging.rst | diff --git a/doc/source/user_guide/categorical.rst b/doc/source/user_guide/categorical.rst
index 9efa7df3ff669..34d04745ccdb5 100644
--- a/doc/source/user_guide/categorical.rst
+++ b/doc/source/user_guide/categorical.rst
@@ -832,9 +832,6 @@ The following table summarizes the results of merging ``Categoricals``:
| category (int) | category (float) | False | float (dtype is inferred) |
+-------------------+------------------------+----------------------+-----------------------------+
-See also the section on :ref:`merge dtypes<merging.dtypes>` for notes about
-preserving merge dtypes and performance.
-
.. _categorical.union:
Unioning
diff --git a/doc/source/user_guide/merging.rst b/doc/source/user_guide/merging.rst
index 10793a6973f8a..3e0e3245e8d64 100644
--- a/doc/source/user_guide/merging.rst
+++ b/doc/source/user_guide/merging.rst
@@ -15,27 +15,27 @@
Merge, join, concatenate and compare
************************************
-pandas provides various facilities for easily combining together Series or
-DataFrame with various kinds of set logic for the indexes
-and relational algebra functionality in the case of join / merge-type
-operations.
+pandas provides various methods for combining and comparing :class:`Series` or
+:class:`DataFrame`.
-In addition, pandas also provides utilities to compare two Series or DataFrame
-and summarize their differences.
+* :func:`~pandas.concat`: Merge multiple :class:`Series` or :class:`DataFrame` objects along a shared index or column
+* :meth:`DataFrame.join`: Merge multiple :class:`DataFrame` objects along the columns
+* :meth:`DataFrame.combine_first`: Update missing values with non-missing values in the same location
+* :func:`~pandas.merge`: Combine two :class:`Series` or :class:`DataFrame` objects with SQL-style joining
+* :func:`~pandas.merge_ordered`: Combine two :class:`Series` or :class:`DataFrame` objects along an ordered axis
+* :func:`~pandas.merge_asof`: Combine two :class:`Series` or :class:`DataFrame` objects by near instead of exact matching keys
+* :meth:`Series.compare` and :meth:`DataFrame.compare`: Show differences in values between two :class:`Series` or :class:`DataFrame` objects
.. _merging.concat:
-Concatenating objects
----------------------
-
-The :func:`~pandas.concat` function (in the main pandas namespace) does all of
-the heavy lifting of performing concatenation operations along an axis while
-performing optional set logic (union or intersection) of the indexes (if any) on
-the other axes. Note that I say "if any" because there is only a single possible
-axis of concatenation for Series.
+:func:`~pandas.concat`
+----------------------
-Before diving into all of the details of ``concat`` and what it can do, here is
-a simple example:
+The :func:`~pandas.concat` function concatenates an arbitrary amount of
+:class:`Series` or :class:`DataFrame` objects along an axis while
+performing optional set logic (union or intersection) of the indexes on
+the other axes. Like ``numpy.concatenate``, :func:`~pandas.concat`
+takes a list or dict of homogeneously-typed objects and concatenates them.
.. ipython:: python
@@ -71,6 +71,7 @@ a simple example:
frames = [df1, df2, df3]
result = pd.concat(frames)
+ result
.. ipython:: python
:suppress:
@@ -79,81 +80,12 @@ a simple example:
p.plot(frames, result, labels=["df1", "df2", "df3"], vertical=True);
plt.close("all");
-Like its sibling function on ndarrays, ``numpy.concatenate``, ``pandas.concat``
-takes a list or dict of homogeneously-typed objects and concatenates them with
-some configurable handling of "what to do with the other axes":
-
-::
-
- pd.concat(
- objs,
- axis=0,
- join="outer",
- ignore_index=False,
- keys=None,
- levels=None,
- names=None,
- verify_integrity=False,
- copy=True,
- )
-
-* ``objs`` : a sequence or mapping of Series or DataFrame objects. If a
- dict is passed, the sorted keys will be used as the ``keys`` argument, unless
- it is passed, in which case the values will be selected (see below). Any None
- objects will be dropped silently unless they are all None in which case a
- ValueError will be raised.
-* ``axis`` : {0, 1, ...}, default 0. The axis to concatenate along.
-* ``join`` : {'inner', 'outer'}, default 'outer'. How to handle indexes on
- other axis(es). Outer for union and inner for intersection.
-* ``ignore_index`` : boolean, default False. If True, do not use the index
- values on the concatenation axis. The resulting axis will be labeled 0, ...,
- n - 1. This is useful if you are concatenating objects where the
- concatenation axis does not have meaningful indexing information. Note
- the index values on the other axes are still respected in the join.
-* ``keys`` : sequence, default None. Construct hierarchical index using the
- passed keys as the outermost level. If multiple levels passed, should
- contain tuples.
-* ``levels`` : list of sequences, default None. Specific levels (unique values)
- to use for constructing a MultiIndex. Otherwise they will be inferred from the
- keys.
-* ``names`` : list, default None. Names for the levels in the resulting
- hierarchical index.
-* ``verify_integrity`` : boolean, default False. Check whether the new
- concatenated axis contains duplicates. This can be very expensive relative
- to the actual data concatenation.
-* ``copy`` : boolean, default True. If False, do not copy data unnecessarily.
-
-Without a little bit of context many of these arguments don't make much sense.
-Let's revisit the above example. Suppose we wanted to associate specific keys
-with each of the pieces of the chopped up DataFrame. We can do this using the
-``keys`` argument:
-
-.. ipython:: python
-
- result = pd.concat(frames, keys=["x", "y", "z"])
-
-.. ipython:: python
- :suppress:
-
- @savefig merging_concat_keys.png
- p.plot(frames, result, labels=["df1", "df2", "df3"], vertical=True)
- plt.close("all");
-
-As you can see (if you've read the rest of the documentation), the resulting
-object's index has a :ref:`hierarchical index <advanced.hierarchical>`. This
-means that we can now select out each chunk by key:
-
-.. ipython:: python
-
- result.loc["y"]
-
-It's not a stretch to see how this can be very useful. More detail on this
-functionality below.
-
.. note::
- It is worth noting that :func:`~pandas.concat` makes a full copy of the data, and that constantly
- reusing this function can create a significant performance hit. If you need
- to use the operation over several datasets, use a list comprehension.
+
+ :func:`~pandas.concat` makes a full copy of the data, and iteratively
+ reusing :func:`~pandas.concat` can create unnecessary copies. Collect all
+ :class:`DataFrame` or :class:`Series` objects in a list before using
+ :func:`~pandas.concat`.
.. code-block:: python
@@ -162,26 +94,20 @@ functionality below.
.. note::
- When concatenating DataFrames with named axes, pandas will attempt to preserve
+ When concatenating :class:`DataFrame` with named axes, pandas will attempt to preserve
these index/column names whenever possible. In the case where all inputs share a
common name, this name will be assigned to the result. When the input names do
not all agree, the result will be unnamed. The same is true for :class:`MultiIndex`,
but the logic is applied separately on a level-by-level basis.
-Set logic on the other axes
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Joining logic of the resulting axis
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-When gluing together multiple DataFrames, you have a choice of how to handle
-the other axes (other than the one being concatenated). This can be done in
-the following two ways:
+The ``join`` keyword specifies how to handle axis values that don't exist in the first
+:class:`DataFrame`.
-* Take the union of them all, ``join='outer'``. This is the default
- option as it results in zero information loss.
-* Take the intersection, ``join='inner'``.
-
-Here is an example of each of these methods. First, the default ``join='outer'``
-behavior:
+``join='outer'`` takes the union of all axis values.
.. ipython:: python
@@ -194,6 +120,7 @@ behavior:
index=[2, 3, 6, 7],
)
result = pd.concat([df1, df4], axis=1)
+ result
.. ipython:: python
@@ -203,11 +130,12 @@ behavior:
p.plot([df1, df4], result, labels=["df1", "df4"], vertical=False);
plt.close("all");
-Here is the same thing with ``join='inner'``:
+``join='inner'`` takes the intersection of the axis values.
.. ipython:: python
result = pd.concat([df1, df4], axis=1, join="inner")
+ result
.. ipython:: python
:suppress:
@@ -216,18 +144,13 @@ Here is the same thing with ``join='inner'``:
p.plot([df1, df4], result, labels=["df1", "df4"], vertical=False);
plt.close("all");
-Lastly, suppose we just wanted to reuse the *exact index* from the original
-DataFrame:
+To perform an effective "left" join using the *exact index* from the original
+:class:`DataFrame`, the result can be reindexed.
.. ipython:: python
result = pd.concat([df1, df4], axis=1).reindex(df1.index)
-
-Similarly, we could index before the concatenation:
-
-.. ipython:: python
-
- pd.concat([df1, df4.reindex(df1.index)], axis=1)
+ result
.. ipython:: python
:suppress:
@@ -240,13 +163,14 @@ Similarly, we could index before the concatenation:
Ignoring indexes on the concatenation axis
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-For ``DataFrame`` objects which don't have a meaningful index, you may wish
-to append them and ignore the fact that they may have overlapping indexes. To
-do this, use the ``ignore_index`` argument:
+
+For :class:`DataFrame` objects which don't have a meaningful index, the
+``ignore_index`` argument ignores overlapping indexes.
.. ipython:: python
result = pd.concat([df1, df4], ignore_index=True, sort=False)
+ result
.. ipython:: python
:suppress:
@@ -257,17 +181,18 @@ do this, use the ``ignore_index`` argument:
.. _merging.mixed_ndims:
-Concatenating with mixed ndims
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Concatenating :class:`Series` and :class:`DataFrame` together
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-You can concatenate a mix of ``Series`` and ``DataFrame`` objects. The
-``Series`` will be transformed to ``DataFrame`` with the column name as
-the name of the ``Series``.
+You can concatenate a mix of :class:`Series` and :class:`DataFrame` objects. The
+:class:`Series` will be transformed to :class:`DataFrame` with the column name as
+the name of the :class:`Series`.
.. ipython:: python
s1 = pd.Series(["X0", "X1", "X2", "X3"], name="X")
result = pd.concat([df1, s1], axis=1)
+ result
.. ipython:: python
:suppress:
@@ -276,19 +201,13 @@ the name of the ``Series``.
p.plot([df1, s1], result, labels=["df1", "s1"], vertical=False);
plt.close("all");
-.. note::
-
- Since we're concatenating a ``Series`` to a ``DataFrame``, we could have
- achieved the same result with :meth:`DataFrame.assign`. To concatenate an
- arbitrary number of pandas objects (``DataFrame`` or ``Series``), use
- ``concat``.
-
-If unnamed ``Series`` are passed they will be numbered consecutively.
+Unnamed :class:`Series` will be numbered consecutively.
.. ipython:: python
s2 = pd.Series(["_0", "_1", "_2", "_3"])
result = pd.concat([df1, s2, s2, s2], axis=1)
+ result
.. ipython:: python
:suppress:
@@ -297,11 +216,12 @@ If unnamed ``Series`` are passed they will be numbered consecutively.
p.plot([df1, s2], result, labels=["df1", "s2"], vertical=False);
plt.close("all");
-Passing ``ignore_index=True`` will drop all name references.
+``ignore_index=True`` will drop all name references.
.. ipython:: python
result = pd.concat([df1, s1], axis=1, ignore_index=True)
+ result
.. ipython:: python
:suppress:
@@ -310,48 +230,45 @@ Passing ``ignore_index=True`` will drop all name references.
p.plot([df1, s1], result, labels=["df1", "s1"], vertical=False);
plt.close("all");
-More concatenating with group keys
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Resulting ``keys``
+~~~~~~~~~~~~~~~~~~
-A fairly common use of the ``keys`` argument is to override the column names
-when creating a new ``DataFrame`` based on existing ``Series``.
-Notice how the default behaviour consists on letting the resulting ``DataFrame``
-inherit the parent ``Series``' name, when these existed.
+The ``keys`` argument adds another axis level to the resulting index or column (creating
+a :class:`MultiIndex`) to associate specific keys with each original :class:`DataFrame`.
.. ipython:: python
- s3 = pd.Series([0, 1, 2, 3], name="foo")
- s4 = pd.Series([0, 1, 2, 3])
- s5 = pd.Series([0, 1, 4, 5])
-
- pd.concat([s3, s4, s5], axis=1)
-
-Through the ``keys`` argument we can override the existing column names.
+ result = pd.concat(frames, keys=["x", "y", "z"])
+ result
+ result.loc["y"]
.. ipython:: python
+ :suppress:
- pd.concat([s3, s4, s5], axis=1, keys=["red", "blue", "yellow"])
+ @savefig merging_concat_keys.png
+ p.plot(frames, result, labels=["df1", "df2", "df3"], vertical=True)
+ plt.close("all");
-Let's consider a variation of the very first example presented:
+The ``keys`` argument can override the column names
+when creating a new :class:`DataFrame` based on existing :class:`Series`.
.. ipython:: python
- result = pd.concat(frames, keys=["x", "y", "z"])
-
-.. ipython:: python
- :suppress:
+ s3 = pd.Series([0, 1, 2, 3], name="foo")
+ s4 = pd.Series([0, 1, 2, 3])
+ s5 = pd.Series([0, 1, 4, 5])
- @savefig merging_concat_group_keys2.png
- p.plot(frames, result, labels=["df1", "df2", "df3"], vertical=True);
- plt.close("all");
+ pd.concat([s3, s4, s5], axis=1)
+ pd.concat([s3, s4, s5], axis=1, keys=["red", "blue", "yellow"])
-You can also pass a dict to ``concat`` in which case the dict keys will be used
-for the ``keys`` argument (unless other keys are specified):
+You can also pass a dict to :func:`concat` in which case the dict keys will be used
+for the ``keys`` argument unless another ``keys`` argument is specified:
.. ipython:: python
pieces = {"x": df1, "y": df2, "z": df3}
result = pd.concat(pieces)
+ result
.. ipython:: python
:suppress:
@@ -363,6 +280,7 @@ for the ``keys`` argument (unless other keys are specified):
.. ipython:: python
result = pd.concat(pieces, keys=["z", "y"])
+ result
.. ipython:: python
:suppress:
@@ -371,21 +289,21 @@ for the ``keys`` argument (unless other keys are specified):
p.plot([df1, df2, df3], result, labels=["df1", "df2", "df3"], vertical=True);
plt.close("all");
-The MultiIndex created has levels that are constructed from the passed keys and
-the index of the ``DataFrame`` pieces:
+The :class:`MultiIndex` created has levels that are constructed from the passed keys and
+the index of the :class:`DataFrame` pieces:
.. ipython:: python
result.index.levels
-If you wish to specify other levels (as will occasionally be the case), you can
-do so using the ``levels`` argument:
+The ``levels`` argument allows specifying the resulting levels associated with the ``keys``
.. ipython:: python
result = pd.concat(
pieces, keys=["x", "y", "z"], levels=[["z", "y", "x", "w"]], names=["group_key"]
)
+ result
.. ipython:: python
:suppress:
@@ -398,21 +316,19 @@ do so using the ``levels`` argument:
result.index.levels
-This is fairly esoteric, but it is actually necessary for implementing things
-like GroupBy where the order of a categorical variable is meaningful.
-
.. _merging.append.row:
-Appending rows to a DataFrame
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Appending rows to a :class:`DataFrame`
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-If you have a series that you want to append as a single row to a ``DataFrame``, you can convert the row into a
-``DataFrame`` and use ``concat``
+If you have a :class:`Series` that you want to append as a single row to a :class:`DataFrame`, you can convert the row into a
+:class:`DataFrame` and use :func:`concat`
.. ipython:: python
s2 = pd.Series(["X0", "X1", "X2", "X3"], index=["A", "B", "C", "D"])
result = pd.concat([df1, s2.to_frame().T], ignore_index=True)
+ result
.. ipython:: python
:suppress:
@@ -421,131 +337,35 @@ If you have a series that you want to append as a single row to a ``DataFrame``,
p.plot([df1, s2], result, labels=["df1", "s2"], vertical=True);
plt.close("all");
-You should use ``ignore_index`` with this method to instruct DataFrame to
-discard its index. If you wish to preserve the index, you should construct an
-appropriately-indexed DataFrame and append or concatenate those objects.
-
.. _merging.join:
-Database-style DataFrame or named Series joining/merging
---------------------------------------------------------
-
-pandas has full-featured, **high performance** in-memory join operations
-idiomatically very similar to relational databases like SQL. These methods
-perform significantly better (in some cases well over an order of magnitude
-better) than other open source implementations (like ``base::merge.data.frame``
-in R). The reason for this is careful algorithmic design and the internal layout
-of the data in ``DataFrame``.
-
-See the :ref:`cookbook<cookbook.merge>` for some advanced strategies.
+:func:`~pandas.merge`
+---------------------
-Users who are familiar with SQL but new to pandas might be interested in a
+:func:`~pandas.merge` performs join operations similar to relational databases like SQL.
+Users who are familiar with SQL but new to pandas can reference a
:ref:`comparison with SQL<compare_with_sql.join>`.
-pandas provides a single function, :func:`~pandas.merge`, as the entry point for
-all standard database join operations between ``DataFrame`` or named ``Series`` objects:
-
-::
-
- pd.merge(
- left,
- right,
- how="inner",
- on=None,
- left_on=None,
- right_on=None,
- left_index=False,
- right_index=False,
- sort=True,
- suffixes=("_x", "_y"),
- copy=True,
- indicator=False,
- validate=None,
- )
+Merge types
+~~~~~~~~~~~
+
+:func:`~pandas.merge` implements common SQL style joining operations.
-* ``left``: A DataFrame or named Series object.
-* ``right``: Another DataFrame or named Series object.
-* ``on``: Column or index level names to join on. Must be found in both the left
- and right DataFrame and/or Series objects. If not passed and ``left_index`` and
- ``right_index`` are ``False``, the intersection of the columns in the
- DataFrames and/or Series will be inferred to be the join keys.
-* ``left_on``: Columns or index levels from the left DataFrame or Series to use as
- keys. Can either be column names, index level names, or arrays with length
- equal to the length of the DataFrame or Series.
-* ``right_on``: Columns or index levels from the right DataFrame or Series to use as
- keys. Can either be column names, index level names, or arrays with length
- equal to the length of the DataFrame or Series.
-* ``left_index``: If ``True``, use the index (row labels) from the left
- DataFrame or Series as its join key(s). In the case of a DataFrame or Series with a MultiIndex
- (hierarchical), the number of levels must match the number of join keys
- from the right DataFrame or Series.
-* ``right_index``: Same usage as ``left_index`` for the right DataFrame or Series
-* ``how``: One of ``'left'``, ``'right'``, ``'outer'``, ``'inner'``, ``'cross'``. Defaults
- to ``inner``. See below for more detailed description of each method.
-* ``sort``: Sort the result DataFrame by the join keys in lexicographical
- order. Defaults to ``True``, setting to ``False`` will improve performance
- substantially in many cases.
-* ``suffixes``: A tuple of string suffixes to apply to overlapping
- columns. Defaults to ``('_x', '_y')``.
-* ``copy``: Always copy data (default ``True``) from the passed DataFrame or named Series
- objects, even when reindexing is not necessary. Cannot be avoided in many
- cases but may improve performance / memory usage. The cases where copying
- can be avoided are somewhat pathological but this option is provided
- nonetheless.
-* ``indicator``: Add a column to the output DataFrame called ``_merge``
- with information on the source of each row. ``_merge`` is Categorical-type
- and takes on a value of ``left_only`` for observations whose merge key
- only appears in ``'left'`` DataFrame or Series, ``right_only`` for observations whose
- merge key only appears in ``'right'`` DataFrame or Series, and ``both`` if the
- observation's merge key is found in both.
-
-* ``validate`` : string, default None.
- If specified, checks if merge is of specified type.
-
- * "one_to_one" or "1:1": checks if merge keys are unique in both
- left and right datasets.
- * "one_to_many" or "1:m": checks if merge keys are unique in left
- dataset.
- * "many_to_one" or "m:1": checks if merge keys are unique in right
- dataset.
- * "many_to_many" or "m:m": allowed, but does not result in checks.
-
-The return type will be the same as ``left``. If ``left`` is a ``DataFrame`` or named ``Series``
-and ``right`` is a subclass of ``DataFrame``, the return type will still be ``DataFrame``.
-
-``merge`` is a function in the pandas namespace, and it is also available as a
-``DataFrame`` instance method :meth:`~DataFrame.merge`, with the calling
-``DataFrame`` being implicitly considered the left object in the join.
-
-The related :meth:`~DataFrame.join` method, uses ``merge`` internally for the
-index-on-index (by default) and column(s)-on-index join. If you are joining on
-index only, you may wish to use ``DataFrame.join`` to save yourself some typing.
-
-Brief primer on merge methods (relational algebra)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Experienced users of relational databases like SQL will be familiar with the
-terminology used to describe join operations between two SQL-table like
-structures (``DataFrame`` objects). There are several cases to consider which
-are very important to understand:
-
-* **one-to-one** joins: for example when joining two ``DataFrame`` objects on
- their indexes (which must contain unique values).
-* **many-to-one** joins: for example when joining an index (unique) to one or
- more columns in a different ``DataFrame``.
-* **many-to-many** joins: joining columns on columns.
+* **one-to-one**: joining two :class:`DataFrame` objects on
+ their indexes which must contain unique values.
+* **many-to-one**: joining a unique index to one or
+ more columns in a different :class:`DataFrame`.
-* **many-to-many**: joining columns on columns.
.. note::
- When joining columns on columns (potentially a many-to-many join), any
- indexes on the passed ``DataFrame`` objects **will be discarded**.
+ When joining columns on columns, potentially a many-to-many join, any
+ indexes on the passed :class:`DataFrame` objects **will be discarded**.
-It is worth spending some time understanding the result of the **many-to-many**
-join case. In SQL / standard relational algebra, if a key combination appears
-more than once in both tables, the resulting table will have the **Cartesian
-product** of the associated data. Here is a very basic example with one unique
-key combination:
+For a **many-to-many** join, if a key combination appears
+more than once in both tables, the :class:`DataFrame` will have the **Cartesian
+product** of the associated data.
.. ipython:: python
@@ -565,6 +385,7 @@ key combination:
}
)
result = pd.merge(left, right, on="key")
+ result
.. ipython:: python
:suppress:
@@ -573,41 +394,8 @@ key combination:
p.plot([left, right], result, labels=["left", "right"], vertical=False);
plt.close("all");
-Here is a more complicated example with multiple join keys. Only the keys
-appearing in ``left`` and ``right`` are present (the intersection), since
-``how='inner'`` by default.
-
-.. ipython:: python
-
- left = pd.DataFrame(
- {
- "key1": ["K0", "K0", "K1", "K2"],
- "key2": ["K0", "K1", "K0", "K1"],
- "A": ["A0", "A1", "A2", "A3"],
- "B": ["B0", "B1", "B2", "B3"],
- }
- )
-
- right = pd.DataFrame(
- {
- "key1": ["K0", "K1", "K1", "K2"],
- "key2": ["K0", "K0", "K0", "K0"],
- "C": ["C0", "C1", "C2", "C3"],
- "D": ["D0", "D1", "D2", "D3"],
- }
- )
-
- result = pd.merge(left, right, on=["key1", "key2"])
-
-.. ipython:: python
- :suppress:
-
- @savefig merging_merge_on_key_multiple.png
- p.plot([left, right], result, labels=["left", "right"], vertical=False);
- plt.close("all");
-
-The ``how`` argument to ``merge`` specifies how to determine which keys are to
-be included in the resulting table. If a key combination **does not appear** in
+The ``how`` argument to :func:`~pandas.merge` specifies which keys are
+included in the resulting table. If a key combination **does not appear** in
either the left or right tables, the values in the joined table will be
``NA``. Here is a summary of the ``how`` options and their SQL equivalent names:
@@ -623,7 +411,24 @@ either the left or right tables, the values in the joined table will be
.. ipython:: python
+ left = pd.DataFrame(
+ {
+ "key1": ["K0", "K0", "K1", "K2"],
+ "key2": ["K0", "K1", "K0", "K1"],
+ "A": ["A0", "A1", "A2", "A3"],
+ "B": ["B0", "B1", "B2", "B3"],
+ }
+ )
+ right = pd.DataFrame(
+ {
+ "key1": ["K0", "K1", "K1", "K2"],
+ "key2": ["K0", "K0", "K0", "K0"],
+ "C": ["C0", "C1", "C2", "C3"],
+ "D": ["D0", "D1", "D2", "D3"],
+ }
+ )
result = pd.merge(left, right, how="left", on=["key1", "key2"])
+ result
.. ipython:: python
:suppress:
@@ -635,6 +440,7 @@ either the left or right tables, the values in the joined table will be
.. ipython:: python
result = pd.merge(left, right, how="right", on=["key1", "key2"])
+ result
.. ipython:: python
:suppress:
@@ -645,6 +451,7 @@ either the left or right tables, the values in the joined table will be
.. ipython:: python
result = pd.merge(left, right, how="outer", on=["key1", "key2"])
+ result
.. ipython:: python
:suppress:
@@ -656,6 +463,7 @@ either the left or right tables, the values in the joined table will be
.. ipython:: python
result = pd.merge(left, right, how="inner", on=["key1", "key2"])
+ result
.. ipython:: python
:suppress:
@@ -667,6 +475,7 @@ either the left or right tables, the values in the joined table will be
.. ipython:: python
result = pd.merge(left, right, how="cross")
+ result
.. ipython:: python
:suppress:
@@ -675,10 +484,9 @@ either the left or right tables, the values in the joined table will be
p.plot([left, right], result, labels=["left", "right"], vertical=False);
plt.close("all");
-You can merge a mult-indexed Series and a DataFrame, if the names of
-the MultiIndex correspond to the columns from the DataFrame. Transform
-the Series to a DataFrame using :meth:`Series.reset_index` before merging,
-as shown in the following example.
+You can merge a :class:`Series` and a :class:`DataFrame` with a :class:`MultiIndex` if the names of
+the :class:`MultiIndex` correspond to the columns from the :class:`DataFrame`. Transform
+the :class:`Series` to a :class:`DataFrame` using :meth:`Series.reset_index` before merging.
.. ipython:: python
@@ -696,7 +504,7 @@ as shown in the following example.
pd.merge(df, ser.reset_index(), on=["Let", "Num"])
-Here is another example with duplicate join keys in DataFrames:
+Performing an outer join with duplicate join keys in :class:`DataFrame`
.. ipython:: python
@@ -705,6 +513,7 @@ Here is another example with duplicate join keys in DataFrames:
right = pd.DataFrame({"A": [4, 5, 6], "B": [2, 2, 2]})
result = pd.merge(left, right, on="B", how="outer")
+ result
.. ipython:: python
:suppress:
@@ -716,21 +525,17 @@ Here is another example with duplicate join keys in DataFrames:
.. warning::
- Joining / merging on duplicate keys can cause a returned frame that is the multiplication of the row dimensions, which may result in memory overflow. It is the user' s responsibility to manage duplicate values in keys before joining large DataFrames.
+ Merging on duplicate keys significantly increases the dimensions of the result
+ and can cause a memory overflow.
.. _merging.validation:
-Checking for duplicate keys
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Users can use the ``validate`` argument to automatically check whether there
-are unexpected duplicates in their merge keys. Key uniqueness is checked before
-merge operations and so should protect against memory overflows. Checking key
-uniqueness is also a good way to ensure user data structures are as expected.
+Merge key uniqueness
+~~~~~~~~~~~~~~~~~~~~
-In the following example, there are duplicate values of ``B`` in the right
-``DataFrame``. As this is not a one-to-one merge -- as specified in the
-``validate`` argument -- an exception will be raised.
+The ``validate`` argument checks the uniqueness of merge keys.
+Key uniqueness is checked before merge operations and can protect against memory overflows
+and unexpected key duplication.
.. ipython:: python
:okexcept:
@@ -739,8 +544,8 @@ In the following example, there are duplicate values of ``B`` in the right
right = pd.DataFrame({"A": [4, 5, 6], "B": [2, 2, 2]})
result = pd.merge(left, right, on="B", how="outer", validate="one_to_one")
-If the user is aware of the duplicates in the right ``DataFrame`` but wants to
-ensure there are no duplicates in the left DataFrame, one can use the
+If the user is aware of the duplicates in the right :class:`DataFrame` but wants to
+ensure there are no duplicates in the left :class:`DataFrame`, one can use the
``validate='one_to_many'`` argument instead, which will not raise an exception.
.. ipython:: python
@@ -750,8 +555,8 @@ ensure there are no duplicates in the left DataFrame, one can use the
.. _merging.indicator:
-The merge indicator
-~~~~~~~~~~~~~~~~~~~
+Merge result indicator
+~~~~~~~~~~~~~~~~~~~~~~
:func:`~pandas.merge` accepts the argument ``indicator``. If ``True``, a
Categorical-type column called ``_merge`` will be added to the output object
@@ -771,97 +576,53 @@ that takes on values:
df2 = pd.DataFrame({"col1": [1, 2, 2], "col_right": [2, 2, 2]})
pd.merge(df1, df2, on="col1", how="outer", indicator=True)
-The ``indicator`` argument will also accept string arguments, in which case the indicator function will use the value of the passed string as the name for the indicator column.
+A string argument to ``indicator`` will use the value as the name for the indicator column.
.. ipython:: python
pd.merge(df1, df2, on="col1", how="outer", indicator="indicator_column")
-.. _merging.dtypes:
-
-Merge dtypes
-~~~~~~~~~~~~
-
-Merging will preserve the dtype of the join keys.
-
-.. ipython:: python
-
- left = pd.DataFrame({"key": [1], "v1": [10]})
- left
- right = pd.DataFrame({"key": [1, 2], "v1": [20, 30]})
- right
-
-We are able to preserve the join keys:
-
-.. ipython:: python
-
- pd.merge(left, right, how="outer")
- pd.merge(left, right, how="outer").dtypes
-
-Of course if you have missing values that are introduced, then the
-resulting dtype will be upcast.
-
-.. ipython:: python
-
- pd.merge(left, right, how="outer", on="key")
- pd.merge(left, right, how="outer", on="key").dtypes
-
-Merging will preserve ``category`` dtypes of the mergands. See also the section on :ref:`categoricals <categorical.merge>`.
+Overlapping value columns
+~~~~~~~~~~~~~~~~~~~~~~~~~
-The left frame.
+The merge ``suffixes`` argument takes a tuple or list of strings to append to
+overlapping column names in the input :class:`DataFrame` to disambiguate the result
+columns:
.. ipython:: python
- from pandas.api.types import CategoricalDtype
-
- X = pd.Series(np.random.choice(["foo", "bar"], size=(10,)))
- X = X.astype(CategoricalDtype(categories=["foo", "bar"]))
-
- left = pd.DataFrame(
- {"X": X, "Y": np.random.choice(["one", "two", "three"], size=(10,))}
- )
- left
- left.dtypes
+ left = pd.DataFrame({"k": ["K0", "K1", "K2"], "v": [1, 2, 3]})
+ right = pd.DataFrame({"k": ["K0", "K0", "K3"], "v": [4, 5, 6]})
-The right frame.
+ result = pd.merge(left, right, on="k")
+ result
.. ipython:: python
+ :suppress:
- right = pd.DataFrame(
- {
- "X": pd.Series(["foo", "bar"], dtype=CategoricalDtype(["foo", "bar"])),
- "Z": [1, 2],
- }
- )
- right
- right.dtypes
-
-The merged result:
+ @savefig merging_merge_overlapped.png
+ p.plot([left, right], result, labels=["left", "right"], vertical=False);
+ plt.close("all");
.. ipython:: python
- result = pd.merge(left, right, how="outer")
+ result = pd.merge(left, right, on="k", suffixes=("_l", "_r"))
result
- result.dtypes
-
-.. note::
- The category dtypes must be *exactly* the same, meaning the same categories and the ordered attribute.
- Otherwise the result will coerce to the categories' dtype.
-
-.. note::
-
- Merging on ``category`` dtypes that are the same can be quite performant compared to ``object`` dtype merging.
+.. ipython:: python
+ :suppress:
-.. _merging.join.index:
+ @savefig merging_merge_overlapped_suffix.png
+ p.plot([left, right], result, labels=["left", "right"], vertical=False);
+ plt.close("all");
-Joining on index
-~~~~~~~~~~~~~~~~
+:meth:`DataFrame.join`
+----------------------
-:meth:`DataFrame.join` is a convenient method for combining the columns of two
-potentially differently-indexed ``DataFrames`` into a single result
-``DataFrame``. Here is a very basic example:
+:meth:`DataFrame.join` combines the columns of multiple,
+potentially differently-indexed :class:`DataFrame` into a single result
+:class:`DataFrame`.
.. ipython:: python
@@ -874,6 +635,7 @@ potentially differently-indexed ``DataFrames`` into a single result
)
result = left.join(right)
+ result
.. ipython:: python
:suppress:
@@ -885,6 +647,7 @@ potentially differently-indexed ``DataFrames`` into a single result
.. ipython:: python
result = left.join(right, how="outer")
+ result
.. ipython:: python
:suppress:
@@ -893,11 +656,10 @@ potentially differently-indexed ``DataFrames`` into a single result
p.plot([left, right], result, labels=["left", "right"], vertical=False);
plt.close("all");
-The same as above, but with ``how='inner'``.
-
.. ipython:: python
result = left.join(right, how="inner")
+ result
.. ipython:: python
:suppress:
@@ -906,50 +668,9 @@ The same as above, but with ``how='inner'``.
p.plot([left, right], result, labels=["left", "right"], vertical=False);
plt.close("all");
-The data alignment here is on the indexes (row labels). This same behavior can
-be achieved using ``merge`` plus additional arguments instructing it to use the
-indexes:
-
-.. ipython:: python
-
- result = pd.merge(left, right, left_index=True, right_index=True, how="outer")
-
-.. ipython:: python
- :suppress:
-
- @savefig merging_merge_index_outer.png
- p.plot([left, right], result, labels=["left", "right"], vertical=False);
- plt.close("all");
-
-.. ipython:: python
-
- result = pd.merge(left, right, left_index=True, right_index=True, how="inner")
-
-.. ipython:: python
- :suppress:
-
- @savefig merging_merge_index_inner.png
- p.plot([left, right], result, labels=["left", "right"], vertical=False);
- plt.close("all");
-
-Joining key columns on an index
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-:meth:`~DataFrame.join` takes an optional ``on`` argument which may be a column
-or multiple column names, which specifies that the passed ``DataFrame`` is to be
-aligned on that column in the ``DataFrame``. These two function calls are
-completely equivalent:
-
-::
-
- left.join(right, on=key_or_keys)
- pd.merge(
- left, right, left_on=key_or_keys, right_index=True, how="left", sort=False
- )
-
-Obviously you can choose whichever form you find more convenient. For
-many-to-one joins (where one of the ``DataFrame``'s is already indexed by the
-join key), using ``join`` may be more convenient. Here is a simple example:
+:meth:`DataFrame.join` takes an optional ``on`` argument which may be a column
+or multiple column names on which the passed :class:`DataFrame` is to be
+aligned.
.. ipython:: python
@@ -964,6 +685,7 @@ join key), using ``join`` may be more convenient. Here is a simple example:
right = pd.DataFrame({"C": ["C0", "C1"], "D": ["D0", "D1"]}, index=["K0", "K1"])
result = left.join(right, on="key")
+ result
.. ipython:: python
:suppress:
@@ -977,6 +699,7 @@ join key), using ``join`` may be more convenient. Here is a simple example:
result = pd.merge(
left, right, left_on="key", right_index=True, how="left", sort=False
)
+ result
.. ipython:: python
:suppress:
@@ -987,7 +710,7 @@ join key), using ``join`` may be more convenient. Here is a simple example:
.. _merging.multikey_join:
-To join on multiple keys, the passed DataFrame must have a ``MultiIndex``:
+To join on multiple keys, the passed :class:`DataFrame` must have a :class:`MultiIndex`:
.. ipython:: python
@@ -1006,12 +729,8 @@ To join on multiple keys, the passed DataFrame must have a ``MultiIndex``:
right = pd.DataFrame(
{"C": ["C0", "C1", "C2", "C3"], "D": ["D0", "D1", "D2", "D3"]}, index=index
)
-
-Now this can be joined by passing the two key column names:
-
-.. ipython:: python
-
result = left.join(right, on=["key1", "key2"])
+ result
.. ipython:: python
:suppress:
@@ -1022,14 +741,14 @@ Now this can be joined by passing the two key column names:
.. _merging.df_inner_join:
-The default for ``DataFrame.join`` is to perform a left join (essentially a
-"VLOOKUP" operation, for Excel users), which uses only the keys found in the
-calling DataFrame. Other join types, for example inner join, can be just as
-easily performed:
+The default for :meth:`DataFrame.join` is to perform a left join
+which uses only the keys found in the
+calling :class:`DataFrame`. Other join types can be specified with ``how``.
.. ipython:: python
result = left.join(right, on=["key1", "key2"], how="inner")
+ result
.. ipython:: python
:suppress:
@@ -1038,16 +757,13 @@ easily performed:
p.plot([left, right], result, labels=["left", "right"], vertical=False);
plt.close("all");
-As you can see, this drops any rows where there was no match.
-
.. _merging.join_on_mi:
Joining a single Index to a MultiIndex
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-You can join a singly-indexed ``DataFrame`` with a level of a MultiIndexed ``DataFrame``.
-The level will match on the name of the index of the singly-indexed frame against
-a level name of the MultiIndexed frame.
+You can join a :class:`DataFrame` with an :class:`Index` to a :class:`DataFrame` with a :class:`MultiIndex` on a level.
+The ``name`` of the :class:`Index` will match the level name of the :class:`MultiIndex`.
.. ipython:: python
@@ -1066,6 +782,7 @@ a level name of the MultiIndexed frame.
)
result = left.join(right, how="inner")
+ result
.. ipython:: python
@@ -1075,29 +792,13 @@ a level name of the MultiIndexed frame.
p.plot([left, right], result, labels=["left", "right"], vertical=False);
plt.close("all");
-This is equivalent but less verbose and more memory efficient / faster than this.
-
-.. ipython:: python
-
- result = pd.merge(
- left.reset_index(), right.reset_index(), on=["key"], how="inner"
- ).set_index(["key","Y"])
-
-.. ipython:: python
- :suppress:
-
- @savefig merging_merge_multiindex_alternative.png
- p.plot([left, right], result, labels=["left", "right"], vertical=False);
- plt.close("all");
-
.. _merging.join_with_two_multi_indexes:
-Joining with two MultiIndexes
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Joining with two :class:`MultiIndex`
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-This is supported in a limited way, provided that the index for the right
-argument is completely used in the join, and is a subset of the indices in
-the left argument, as in this example:
+The :class:`MultiIndex` of the input argument must be completely used
+in the join and must be a subset of the indices in the left argument.
.. ipython:: python
@@ -1115,9 +816,6 @@ the left argument, as in this example:
left.join(right, on=["abc", "xy"], how="inner")
-If that condition is not satisfied, a join with two multi-indexes can be
-done using the following code.
-
.. ipython:: python
leftindex = pd.MultiIndex.from_tuples(
@@ -1137,6 +835,7 @@ done using the following code.
result = pd.merge(
left.reset_index(), right.reset_index(), on=["key"], how="inner"
).set_index(["key", "X", "Y"])
+ result
.. ipython:: python
:suppress:
@@ -1152,7 +851,7 @@ Merging on a combination of columns and index levels
Strings passed as the ``on``, ``left_on``, and ``right_on`` parameters
may refer to either column names or index level names. This enables merging
-``DataFrame`` instances on a combination of index levels and columns without
+:class:`DataFrame` instances on a combination of index levels and columns without
resetting indexes.
.. ipython:: python
@@ -1180,6 +879,7 @@ resetting indexes.
)
result = left.merge(right, on=["key1", "key2"])
+ result
.. ipython:: python
:suppress:
@@ -1190,76 +890,23 @@ resetting indexes.
.. note::
- When DataFrames are merged on a string that matches an index level in both
- frames, the index level is preserved as an index level in the resulting
- DataFrame.
-
-.. note::
- When DataFrames are merged using only some of the levels of a ``MultiIndex``,
- the extra levels will be dropped from the resulting merge. In order to
- preserve those levels, use ``reset_index`` on those level names to move
- those levels to columns prior to doing the merge.
+ When :class:`DataFrame` are joined on a string that matches an index level in both
+ arguments, the index level is preserved as an index level in the resulting
+ :class:`DataFrame`.
.. note::
- If a string matches both a column name and an index level name, then a
- warning is issued and the column takes precedence. This will result in an
- ambiguity error in a future version.
-
-Overlapping value columns
-~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The merge ``suffixes`` argument takes a tuple of list of strings to append to
-overlapping column names in the input ``DataFrame``\ s to disambiguate the result
-columns:
-
-.. ipython:: python
-
- left = pd.DataFrame({"k": ["K0", "K1", "K2"], "v": [1, 2, 3]})
- right = pd.DataFrame({"k": ["K0", "K0", "K3"], "v": [4, 5, 6]})
-
- result = pd.merge(left, right, on="k")
-
-.. ipython:: python
- :suppress:
-
- @savefig merging_merge_overlapped.png
- p.plot([left, right], result, labels=["left", "right"], vertical=False);
- plt.close("all");
-
-.. ipython:: python
-
- result = pd.merge(left, right, on="k", suffixes=("_l", "_r"))
-
-.. ipython:: python
- :suppress:
-
- @savefig merging_merge_overlapped_suffix.png
- p.plot([left, right], result, labels=["left", "right"], vertical=False);
- plt.close("all");
-
-:meth:`DataFrame.join` has ``lsuffix`` and ``rsuffix`` arguments which behave
-similarly.
-
-.. ipython:: python
-
- left = left.set_index("k")
- right = right.set_index("k")
- result = left.join(right, lsuffix="_l", rsuffix="_r")
-
-.. ipython:: python
- :suppress:
-
- @savefig merging_merge_overlapped_multi_suffix.png
- p.plot([left, right], result, labels=["left", "right"], vertical=False);
- plt.close("all");
+ When :class:`DataFrame` are joined using only some of the levels of a :class:`MultiIndex`,
+ the extra levels will be dropped from the resulting join. To
+ preserve those levels, use :meth:`DataFrame.reset_index` on those level
+ names to move those levels to columns prior to the join.
.. _merging.multiple_join:
-Joining multiple DataFrames
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Joining multiple :class:`DataFrame`
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-A list or tuple of ``DataFrames`` can also be passed to :meth:`~DataFrame.join`
+A list or tuple of :class:`DataFrame` objects can also be passed to :meth:`~DataFrame.join`
to join them together on their indexes.
.. ipython:: python
@@ -1281,12 +928,12 @@ to join them together on their indexes.
.. _merging.combine_first.update:
-Merging together values within Series or DataFrame columns
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+:meth:`DataFrame.combine_first`
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Another fairly common situation is to have two like-indexed (or similarly
-indexed) ``Series`` or ``DataFrame`` objects and wanting to "patch" values in
-one object from values for matching indices in the other. Here is an example:
+:meth:`DataFrame.combine_first` updates missing values from one :class:`DataFrame`
+with the non-missing values in another :class:`DataFrame` in the corresponding
+location.
.. ipython:: python
@@ -1294,12 +941,8 @@ one object from values for matching indices in the other. Here is an example:
[[np.nan, 3.0, 5.0], [-4.6, np.nan, np.nan], [np.nan, 7.0, np.nan]]
)
df2 = pd.DataFrame([[-42.6, np.nan, -8.2], [-5.0, 1.6, 4]], index=[1, 2])
-
-For this, use the :meth:`~DataFrame.combine_first` method:
-
-.. ipython:: python
-
result = df1.combine_first(df2)
+ result
.. ipython:: python
:suppress:
@@ -1308,39 +951,13 @@ For this, use the :meth:`~DataFrame.combine_first` method:
p.plot([df1, df2], result, labels=["df1", "df2"], vertical=False);
plt.close("all");
-Note that this method only takes values from the right ``DataFrame`` if they are
-missing in the left ``DataFrame``. A related method, :meth:`~DataFrame.update`,
-alters non-NA values in place:
-
-.. ipython:: python
- :suppress:
-
- df1_copy = df1.copy()
-
-.. ipython:: python
-
- df1.update(df2)
-
-.. ipython:: python
- :suppress:
-
- @savefig merging_update.png
- p.plot([df1_copy, df2], df1, labels=["df1", "df2"], vertical=False);
- plt.close("all");
-
-.. _merging.time_series:
-
-Timeseries friendly merging
----------------------------
-
.. _merging.merge_ordered:
-Merging ordered data
-~~~~~~~~~~~~~~~~~~~~
+:func:`merge_ordered`
+---------------------
-A :func:`merge_ordered` function allows combining time series and other
-ordered data. In particular it has an optional ``fill_method`` keyword to
-fill/interpolate missing data:
+:func:`merge_ordered` combines ordered data such as numeric or time series data
+with optional filling of missing data with ``fill_method``.
.. ipython:: python
@@ -1354,19 +971,16 @@ fill/interpolate missing data:
.. _merging.merge_asof:
-Merging asof
-~~~~~~~~~~~~
-
-A :func:`merge_asof` is similar to an ordered left-join except that we match on
-nearest key rather than equal keys. For each row in the ``left`` ``DataFrame``,
-we select the last row in the ``right`` ``DataFrame`` whose ``on`` key is less
-than the left's key. Both DataFrames must be sorted by the key.
+:func:`merge_asof`
+---------------------
-Optionally an asof merge can perform a group-wise merge. This matches the
-``by`` key equally, in addition to the nearest match on the ``on`` key.
+:func:`merge_asof` is similar to an ordered left-join except that matches are on the
+nearest key rather than equal keys. For each row in the ``left`` :class:`DataFrame`,
+the last row in the ``right`` :class:`DataFrame` is selected where the ``on`` key is less
+than the left's key. Both :class:`DataFrame` must be sorted by the key.
-For example; we might have ``trades`` and ``quotes`` and we want to ``asof``
-merge them.
+Optionally, :func:`merge_asof` can perform a group-wise merge by matching the
+``by`` key in addition to the nearest match on the ``on`` key.
.. ipython:: python
@@ -1408,25 +1022,17 @@ merge them.
},
columns=["time", "ticker", "bid", "ask"],
)
-
-.. ipython:: python
-
trades
quotes
-
-By default we are taking the asof of the quotes.
-
-.. ipython:: python
-
pd.merge_asof(trades, quotes, on="time", by="ticker")
-We only asof within ``2ms`` between the quote time and the trade time.
+:func:`merge_asof` merges within ``2ms`` between the quote time and the trade time.
.. ipython:: python
pd.merge_asof(trades, quotes, on="time", by="ticker", tolerance=pd.Timedelta("2ms"))
-We only asof within ``10ms`` between the quote time and the trade time and we
+:func:`merge_asof` merges within ``10ms`` between the quote time and the trade time and
exclude exact matches on time. Note that though we exclude the exact matches
(of the quotes), prior quotes **do** propagate to that point in time.
@@ -1443,14 +1049,11 @@ exclude exact matches on time. Note that though we exclude the exact matches
.. _merging.compare:
-Comparing objects
------------------
-
-The :meth:`~Series.compare` and :meth:`~DataFrame.compare` methods allow you to
-compare two DataFrame or Series, respectively, and summarize their differences.
+:meth:`~Series.compare`
+-----------------------
-For example, you might want to compare two ``DataFrame`` and stack their differences
-side by side.
+The :meth:`Series.compare` and :meth:`DataFrame.compare` methods allow you to
+compare two :class:`DataFrame` or :class:`Series`, respectively, and summarize their differences.
.. ipython:: python
@@ -1463,36 +1066,29 @@ side by side.
columns=["col1", "col2", "col3"],
)
df
-
-.. ipython:: python
-
df2 = df.copy()
df2.loc[0, "col1"] = "c"
df2.loc[2, "col3"] = 4.0
df2
-
-.. ipython:: python
-
df.compare(df2)
By default, if two corresponding values are equal, they will be shown as ``NaN``.
Furthermore, if all values in an entire row / column, the row / column will be
omitted from the result. The remaining differences will be aligned on columns.
-If you wish, you may choose to stack the differences on rows.
+Stack the differences on rows.
.. ipython:: python
df.compare(df2, align_axis=0)
-If you wish to keep all original rows and columns, set ``keep_shape`` argument
-to ``True``.
+Keep all original rows and columns with ``keep_shape=True``.
.. ipython:: python
df.compare(df2, keep_shape=True)
-You may also keep all the original values even if they are equal.
+Keep all the original values even if they are equal.
.. ipython:: python
| * Added more sphinx references
* Added summary section in the beginning of method overviews
* Ensured merging data is printed in ipython and not just plotted
* Removed sections that were identical to the docstrings | https://api.github.com/repos/pandas-dev/pandas/pulls/54606 | 2023-08-17T21:15:20Z | 2023-08-21T19:21:48Z | 2023-08-21T19:21:48Z | 2023-08-21T19:21:51Z |
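The merging docs rewritten in the diff above stress that a many-to-many join yields the **Cartesian product** of matching rows. As a rough pure-Python sketch of that semantic (no pandas required; `inner_join` and the sample rows are illustrative, not part of the pandas API):

```python
# Minimal model of pd.merge(left, right, on=key, how="inner"):
# every left row is paired with every right row sharing the same key,
# so duplicated keys on both sides multiply into a Cartesian product.
def inner_join(left, right, key):
    result = []
    for lrow in left:
        for rrow in right:
            if lrow[key] == rrow[key]:
                # combine columns; keep the key once
                merged = {**lrow, **{k: v for k, v in rrow.items() if k != key}}
                result.append(merged)
    return result


left = [{"key": "K0", "A": "A0"}, {"key": "K0", "A": "A1"}]
right = [{"key": "K0", "B": "B0"}, {"key": "K0", "B": "B1"}]
rows = inner_join(left, right, "key")
print(len(rows))  # 2 matches x 2 matches -> 4 rows
```

This is exactly the blow-up the rewritten warning describes: duplicated keys multiply the row count, which is what the ``validate`` argument is there to catch early.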
DEPR: deprecated nonkeyword arguments in to_markdown | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index 4ad450c965464..1ccbd292a5441 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -94,6 +94,7 @@ Deprecations
~~~~~~~~~~~~
- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_hdf` except ``path_or_buf``. (:issue:`54229`)
- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_latex` except ``buf``. (:issue:`54229`)
+- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_markdown` except ``buf``. (:issue:`54229`)
- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_pickle` except ``path``. (:issue:`54229`)
- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_string` except ``buf``. (:issue:`54229`)
-
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 1e10e8f11a575..c2a3d9285386e 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -2800,6 +2800,9 @@ def to_feather(self, path: FilePath | WriteBuffer[bytes], **kwargs) -> None:
to_feather(self, path, **kwargs)
+ @deprecate_nonkeyword_arguments(
+ version="3.0", allowed_args=["self", "buf"], name="to_markdown"
+ )
@doc(
Series.to_markdown,
klass=_shared_doc_kwargs["klass"],
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 5c80e743d67b4..564c799d7ab66 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -1871,7 +1871,7 @@ def to_markdown(
{examples}
"""
return self.to_frame().to_markdown(
- buf, mode, index, storage_options=storage_options, **kwargs
+ buf, mode=mode, index=index, storage_options=storage_options, **kwargs
)
# ----------------------------------------------------------------------
diff --git a/pandas/tests/io/formats/test_to_markdown.py b/pandas/tests/io/formats/test_to_markdown.py
index 437f079c5f2f9..85eca834ff0d4 100644
--- a/pandas/tests/io/formats/test_to_markdown.py
+++ b/pandas/tests/io/formats/test_to_markdown.py
@@ -1,8 +1,12 @@
-from io import StringIO
+from io import (
+ BytesIO,
+ StringIO,
+)
import pytest
import pandas as pd
+import pandas._testing as tm
pytest.importorskip("tabulate")
@@ -88,3 +92,15 @@ def test_showindex_disallowed_in_kwargs():
df = pd.DataFrame([1, 2, 3])
with pytest.raises(ValueError, match="Pass 'index' instead of 'showindex"):
df.to_markdown(index=True, showindex=True)
+
+
+def test_markdown_pos_args_deprecatation():
+ # GH-54229
+ df = pd.DataFrame({"a": [1, 2, 3]})
+ msg = (
+ r"Starting with pandas version 3.0 all arguments of to_markdown except for the "
+ r"argument 'buf' will be keyword-only."
+ )
+ with tm.assert_produces_warning(FutureWarning, match=msg):
+ buffer = BytesIO()
+ df.to_markdown(buffer, "grid")
| - [x] xref #54229
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added an entry in the latest `doc/source/whatsnew/v2.2.0.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54605 | 2023-08-17T20:50:37Z | 2023-08-18T16:58:21Z | 2023-08-18T16:58:21Z | 2023-08-18T19:18:51Z |
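The deprecation in this PR is driven by pandas' internal `deprecate_nonkeyword_arguments` decorator (in `pandas.util._decorators`). A hedged, standalone sketch of how such a decorator can work — the names and message format below mimic the test in the diff but this is an illustrative reimplementation, not the pandas source:

```python
import functools
import warnings


def deprecate_nonkeyword_arguments(version, allowed_args, name):
    """Warn when a call passes more positional arguments than allowed."""
    def decorate(func):
        num_allowed = len(allowed_args)

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if len(args) > num_allowed:
                warnings.warn(
                    f"Starting with pandas version {version} all arguments of "
                    f"{name} except for the argument '{allowed_args[-1]}' "
                    "will be keyword-only.",
                    FutureWarning,
                    stacklevel=2,
                )
            return func(*args, **kwargs)

        return wrapper

    return decorate


@deprecate_nonkeyword_arguments(version="3.0", allowed_args=["buf"], name="to_markdown")
def to_markdown(buf=None, mode="wt", index=True):
    return (buf, mode, index)


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    to_markdown("out.md", "grid")  # positional `mode` triggers the warning

assert any(issubclass(w.category, FutureWarning) for w in caught)
```

Passing only the allowed positional argument (``to_markdown("out.md", mode="grid")``) emits no warning, which is why the diff also changes ``Series.to_markdown`` to forward ``mode``/``index`` by keyword.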
Backport PR #54574 on branch 2.1.x (ENH: add cummax/cummin/cumprod support for arrow dtypes) | diff --git a/doc/source/whatsnew/v2.1.0.rst b/doc/source/whatsnew/v2.1.0.rst
index a8004dfd506b0..43a64a79e691b 100644
--- a/doc/source/whatsnew/v2.1.0.rst
+++ b/doc/source/whatsnew/v2.1.0.rst
@@ -265,6 +265,7 @@ Other enhancements
- Many read/to_* functions, such as :meth:`DataFrame.to_pickle` and :func:`read_csv`, support forwarding compression arguments to ``lzma.LZMAFile`` (:issue:`52979`)
- Reductions :meth:`Series.argmax`, :meth:`Series.argmin`, :meth:`Series.idxmax`, :meth:`Series.idxmin`, :meth:`Index.argmax`, :meth:`Index.argmin`, :meth:`DataFrame.idxmax`, :meth:`DataFrame.idxmin` are now supported for object-dtype (:issue:`4279`, :issue:`18021`, :issue:`40685`, :issue:`43697`)
- :meth:`DataFrame.to_parquet` and :func:`read_parquet` will now write and read ``attrs`` respectively (:issue:`54346`)
+- :meth:`Series.cummax`, :meth:`Series.cummin` and :meth:`Series.cumprod` are now supported for pyarrow dtypes with pyarrow version 13.0 and above (:issue:`52085`)
- Added support for the DataFrame Consortium Standard (:issue:`54383`)
- Performance improvement in :meth:`.DataFrameGroupBy.quantile` and :meth:`.SeriesGroupBy.quantile` (:issue:`51722`)
diff --git a/pandas/core/arrays/arrow/array.py b/pandas/core/arrays/arrow/array.py
index 0f46e5a4e7482..3c65e6b4879e2 100644
--- a/pandas/core/arrays/arrow/array.py
+++ b/pandas/core/arrays/arrow/array.py
@@ -1389,6 +1389,9 @@ def _accumulate(
NotImplementedError : subclass does not define accumulations
"""
pyarrow_name = {
+ "cummax": "cumulative_max",
+ "cummin": "cumulative_min",
+ "cumprod": "cumulative_prod_checked",
"cumsum": "cumulative_sum_checked",
}.get(name, name)
pyarrow_meth = getattr(pc, pyarrow_name, None)
@@ -1398,12 +1401,20 @@ def _accumulate(
data_to_accum = self._pa_array
pa_dtype = data_to_accum.type
- if pa.types.is_duration(pa_dtype):
- data_to_accum = data_to_accum.cast(pa.int64())
+
+ convert_to_int = (
+ pa.types.is_temporal(pa_dtype) and name in ["cummax", "cummin"]
+ ) or (pa.types.is_duration(pa_dtype) and name == "cumsum")
+
+ if convert_to_int:
+ if pa_dtype.bit_width == 32:
+ data_to_accum = data_to_accum.cast(pa.int32())
+ else:
+ data_to_accum = data_to_accum.cast(pa.int64())
result = pyarrow_meth(data_to_accum, skip_nulls=skipna, **kwargs)
- if pa.types.is_duration(pa_dtype):
+ if convert_to_int:
result = result.cast(pa_dtype)
return type(self)(result)
diff --git a/pandas/tests/extension/test_arrow.py b/pandas/tests/extension/test_arrow.py
index dd1ff925adf5f..e748f320b3f09 100644
--- a/pandas/tests/extension/test_arrow.py
+++ b/pandas/tests/extension/test_arrow.py
@@ -347,10 +347,15 @@ class TestBaseAccumulateTests(base.BaseAccumulateTests):
def check_accumulate(self, ser, op_name, skipna):
result = getattr(ser, op_name)(skipna=skipna)
- if ser.dtype.kind == "m":
+ pa_type = ser.dtype.pyarrow_dtype
+ if pa.types.is_temporal(pa_type):
# Just check that we match the integer behavior.
- ser = ser.astype("int64[pyarrow]")
- result = result.astype("int64[pyarrow]")
+ if pa_type.bit_width == 32:
+ int_type = "int32[pyarrow]"
+ else:
+ int_type = "int64[pyarrow]"
+ ser = ser.astype(int_type)
+ result = result.astype(int_type)
result = result.astype("Float64")
expected = getattr(ser.astype("Float64"), op_name)(skipna=skipna)
@@ -361,14 +366,20 @@ def _supports_accumulation(self, ser: pd.Series, op_name: str) -> bool:
# attribute "pyarrow_dtype"
pa_type = ser.dtype.pyarrow_dtype # type: ignore[union-attr]
- if pa.types.is_string(pa_type) or pa.types.is_binary(pa_type):
- if op_name in ["cumsum", "cumprod"]:
+ if (
+ pa.types.is_string(pa_type)
+ or pa.types.is_binary(pa_type)
+ or pa.types.is_decimal(pa_type)
+ ):
+ if op_name in ["cumsum", "cumprod", "cummax", "cummin"]:
return False
- elif pa.types.is_temporal(pa_type) and not pa.types.is_duration(pa_type):
- if op_name in ["cumsum", "cumprod"]:
+ elif pa.types.is_boolean(pa_type):
+ if op_name in ["cumprod", "cummax", "cummin"]:
return False
- elif pa.types.is_duration(pa_type):
- if op_name == "cumprod":
+ elif pa.types.is_temporal(pa_type):
+ if op_name == "cumsum" and not pa.types.is_duration(pa_type):
+ return False
+ elif op_name == "cumprod":
return False
return True
@@ -384,7 +395,9 @@ def test_accumulate_series(self, data, all_numeric_accumulations, skipna, reques
data, all_numeric_accumulations, skipna
)
- if all_numeric_accumulations != "cumsum" or pa_version_under9p0:
+ if pa_version_under9p0 or (
+ pa_version_under13p0 and all_numeric_accumulations != "cumsum"
+ ):
# xfailing takes a long time to run because pytest
# renders the exception messages even when not showing them
opt = request.config.option
| Backport PR #54574: ENH: add cummax/cummin/cumprod support for arrow dtypes | https://api.github.com/repos/pandas-dev/pandas/pulls/54603 | 2023-08-17T15:54:22Z | 2023-08-17T17:16:58Z | 2023-08-17T17:16:58Z | 2023-08-17T17:16:59Z |
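The new accumulations dispatch `cummax`/`cummin`/`cumprod` to pyarrow's `cumulative_max`, `cumulative_min`, and `cumulative_prod_checked` kernels (available from pyarrow 13.0); for temporal types the patch casts to an integer of matching bit width, accumulates, and casts back, since those kernels only accept numeric input. The `skipna` semantics the kernels are expected to match can be sketched in pure Python, with `None` standing in for a missing value:

```python
def cummax(values, skipna=True):
    """Running maximum over a list, mirroring pandas cummax semantics.

    skipna=True: a missing value stays missing in the output but does not
    reset the running maximum. skipna=False: the first missing value
    poisons every subsequent position.
    """
    out = []
    running = None
    poisoned = False
    for v in values:
        if v is None or poisoned:
            out.append(None)
            if v is None and not skipna:
                poisoned = True
            continue
        running = v if running is None else max(running, v)
        out.append(running)
    return out
```

For example, `cummax([1, None, 3, 2])` keeps accumulating past the gap, while `cummax([1, None, 3, 2], skipna=False)` returns missing from the gap onward.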
DEPR: deprecated nonkeyword arguments in to_stata | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 797b2f4ddb45e..bd286e48dc203 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -64,6 +64,7 @@
from pandas.util._decorators import (
Appender,
Substitution,
+ deprecate_nonkeyword_arguments,
doc,
)
from pandas.util._exceptions import find_stack_level
@@ -2614,6 +2615,9 @@ def _from_arrays(
storage_options=_shared_docs["storage_options"],
compression_options=_shared_docs["compression_options"] % "path",
)
+ @deprecate_nonkeyword_arguments(
+ version="3.0", allowed_args=["self", "path"], name="to_stata"
+ )
def to_stata(
self,
path: FilePath | WriteBuffer[bytes],
| - [ ] xref #54229
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/v2.2.0.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54602 | 2023-08-17T12:11:24Z | 2023-08-17T20:58:25Z | null | 2023-08-17T21:11:06Z |
DEPR: Nonkeyword arguments in to_latex | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index d520aacf3e85c..af224b277d87d 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -93,6 +93,7 @@ Other API changes
Deprecations
~~~~~~~~~~~~
- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_hdf` except ``path_or_buf``. (:issue:`54229`)
+- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_latex` except ``buf``. (:issue:`54229`)
- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_pickle` except ``path``. (:issue:`54229`)
-
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 56a60f5d1a38c..c28ae86985896 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -3306,6 +3306,9 @@ def to_latex(
...
@final
+ @deprecate_nonkeyword_arguments(
+ version="3.0", allowed_args=["self", "buf"], name="to_latex"
+ )
def to_latex(
self,
buf: FilePath | WriteBuffer[str] | None = None,
diff --git a/pandas/tests/io/formats/test_to_latex.py b/pandas/tests/io/formats/test_to_latex.py
index d715daf253cd3..1fd96dff27d06 100644
--- a/pandas/tests/io/formats/test_to_latex.py
+++ b/pandas/tests/io/formats/test_to_latex.py
@@ -187,6 +187,22 @@ def test_to_latex_midrule_location(self):
)
assert result == expected
+ def test_to_latex_pos_args_deprecation(self):
+ # GH-54229
+ df = DataFrame(
+ {
+ "name": ["Raphael", "Donatello"],
+ "age": [26, 45],
+ "height": [181.23, 177.65],
+ }
+ )
+ msg = (
+ r"Starting with pandas version 3.0 all arguments of to_latex except for "
+ r"the argument 'buf' will be keyword-only."
+ )
+ with tm.assert_produces_warning(FutureWarning, match=msg):
+ df.to_latex(None, None)
+
class TestToLatexLongtable:
def test_to_latex_empty_longtable(self):
| - [x] xref #54229 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/v2.2.0.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54601 | 2023-08-17T12:10:45Z | 2023-08-17T23:22:01Z | 2023-08-17T23:22:01Z | 2023-08-17T23:22:08Z |
DEPR: Nonkeyword arguments in to_pickle | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index c35473b852eb9..9295ad6cb9aa6 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -92,7 +92,7 @@ Other API changes
Deprecations
~~~~~~~~~~~~
--
+- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_pickle` except ``path``. (:issue:`54229`)
-
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 7624c8f7c7930..8b1540efcef54 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -3017,6 +3017,9 @@ def to_sql(
)
@final
+ @deprecate_nonkeyword_arguments(
+ version="3.0", allowed_args=["self", "path"], name="to_pickle"
+ )
@doc(
storage_options=_shared_docs["storage_options"],
compression_options=_shared_docs["compression_options"] % "path",
diff --git a/pandas/tests/io/test_pickle.py b/pandas/tests/io/test_pickle.py
index 75e4de7074e63..a30b3f64bd75c 100644
--- a/pandas/tests/io/test_pickle.py
+++ b/pandas/tests/io/test_pickle.py
@@ -585,3 +585,15 @@ def test_pickle_frame_v124_unpickle_130(datapath):
expected = pd.DataFrame(index=[], columns=[])
tm.assert_frame_equal(df, expected)
+
+
+def test_pickle_pos_args_deprecation():
+ # GH-54229
+ df = pd.DataFrame({"a": [1, 2, 3]})
+ msg = (
+ r"Starting with pandas version 3.0 all arguments of to_pickle except for the "
+ r"argument 'path' will be keyword-only."
+ )
+ with tm.assert_produces_warning(FutureWarning, match=msg):
+ buffer = io.BytesIO()
+ df.to_pickle(buffer, "infer")
| - [x] xref #54229
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/v2.2.0.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54599 | 2023-08-17T10:44:09Z | 2023-08-17T18:26:43Z | 2023-08-17T18:26:42Z | 2023-08-17T20:05:34Z |
DEPR: Nonkeyword arguments in to_pickle | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index c35473b852eb9..9295ad6cb9aa6 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -92,7 +92,7 @@ Other API changes
Deprecations
~~~~~~~~~~~~
--
+- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_pickle` except ``path``. (:issue:`54229`)
-
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 7624c8f7c7930..88fa32ca5b20b 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -3017,6 +3017,9 @@ def to_sql(
)
@final
+ @deprecate_nonkeyword_arguments(
+ version=None, allowed_args=["self", "path"], name="to_pickle"
+ )
@doc(
storage_options=_shared_docs["storage_options"],
compression_options=_shared_docs["compression_options"] % "path",
diff --git a/pandas/tests/generic/test_generic.py b/pandas/tests/generic/test_generic.py
index 87beab04bc586..7b5b31037d445 100644
--- a/pandas/tests/generic/test_generic.py
+++ b/pandas/tests/generic/test_generic.py
@@ -460,3 +460,13 @@ def test_bool_dep(self) -> None:
)
with tm.assert_produces_warning(FutureWarning, match=msg_warn):
DataFrame({"col": [False]}).bool()
+
+ def test_drop_pos_args_deprecation_for_to_pickle(self):
+ # GH-54229
+ df = DataFrame({"a": [1, 2, 3]})
+ msg = (
+ r"In a future version of pandas all arguments of to_pickle "
+ r"except for the argument 'path' will be keyword-only"
+ )
+ with tm.assert_produces_warning(FutureWarning, match=msg):
+ df.to_pickle("./dummy.pkl", "infer")
| - [ ] closes #54229
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/v2.2.0.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54598 | 2023-08-17T10:37:38Z | 2023-08-17T10:38:52Z | null | 2023-08-17T10:38:55Z |
DEPR: deprecated nonkeyword arguments for to_string | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index d520aacf3e85c..7019eb8da964e 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -94,6 +94,7 @@ Deprecations
~~~~~~~~~~~~
- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_hdf` except ``path_or_buf``. (:issue:`54229`)
- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_pickle` except ``path``. (:issue:`54229`)
+- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_string` except ``buf``. (:issue:`54229`)
-
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 797b2f4ddb45e..1e10e8f11a575 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -64,6 +64,7 @@
from pandas.util._decorators import (
Appender,
Substitution,
+ deprecate_nonkeyword_arguments,
doc,
)
from pandas.util._exceptions import find_stack_level
@@ -1229,6 +1230,9 @@ def to_string(
) -> None:
...
+ @deprecate_nonkeyword_arguments(
+ version="3.0", allowed_args=["self", "buf"], name="to_string"
+ )
@Substitution(
header_type="bool or list of str",
header="Write out the column names. If a list of columns "
diff --git a/pandas/tests/io/formats/test_to_string.py b/pandas/tests/io/formats/test_to_string.py
index 0c260f0af0a8d..45f3b2201a599 100644
--- a/pandas/tests/io/formats/test_to_string.py
+++ b/pandas/tests/io/formats/test_to_string.py
@@ -11,6 +11,7 @@
option_context,
to_datetime,
)
+import pandas._testing as tm
def test_repr_embedded_ndarray():
@@ -355,3 +356,15 @@ def test_to_string_string_dtype():
z int64[pyarrow]"""
)
assert result == expected
+
+
+def test_to_string_pos_args_deprecation():
+ # GH-54229
+ df = DataFrame({"a": [1, 2, 3]})
+ msg = (
+ r"Starting with pandas version 3.0 all arguments of to_string except for the "
+ r"argument 'buf' will be keyword-only."
+ )
+ with tm.assert_produces_warning(FutureWarning, match=msg):
+ buf = StringIO()
+ df.to_string(buf, None, None, True, True)
| - [x] xref #54229 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/v2.2.0.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54597 | 2023-08-17T10:11:34Z | 2023-08-17T23:23:02Z | 2023-08-17T23:23:02Z | 2023-08-17T23:23:09Z |
DEPR: non keyword arguments in to_latex | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 7624c8f7c7930..22b02359294dc 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -3017,6 +3017,9 @@ def to_sql(
)
@final
+ @deprecate_nonkeyword_arguments(
+ version=None, allowed_args=["self", "path"], name="to_pickle"
+ )
@doc(
storage_options=_shared_docs["storage_options"],
compression_options=_shared_docs["compression_options"] % "path",
@@ -3300,6 +3303,9 @@ def to_latex(
...
@final
+ @deprecate_nonkeyword_arguments(
+ version=None, allowed_args=["self", "path"], name="to_latex"
+ )
def to_latex(
self,
buf: FilePath | WriteBuffer[str] | None = None,
| - [ ] xref #54229
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/v2.2.0.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54596 | 2023-08-17T09:15:21Z | 2023-08-17T10:03:28Z | null | 2023-08-17T10:11:49Z |
DEPR: Non keyword arguments in to_string | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index c35473b852eb9..92c3f4aa45861 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -92,7 +92,7 @@ Other API changes
Deprecations
~~~~~~~~~~~~
--
+- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_string` except ``buf``. (:issue:`54229`)
-
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 797b2f4ddb45e..211c0b65046a2 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -64,6 +64,7 @@
from pandas.util._decorators import (
Appender,
Substitution,
+ deprecate_nonkeyword_arguments,
doc,
)
from pandas.util._exceptions import find_stack_level
@@ -1229,6 +1230,9 @@ def to_string(
) -> None:
...
+ @deprecate_nonkeyword_arguments(
+ version=None, allowed_args=["self", "buf"], name="to_string"
+ )
@Substitution(
header_type="bool or list of str",
header="Write out the column names. If a list of columns "
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 7624c8f7c7930..88fa32ca5b20b 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -3017,6 +3017,9 @@ def to_sql(
)
@final
+ @deprecate_nonkeyword_arguments(
+ version=None, allowed_args=["self", "path"], name="to_pickle"
+ )
@doc(
storage_options=_shared_docs["storage_options"],
compression_options=_shared_docs["compression_options"] % "path",
| - [ ] xref #54229
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/v2.2.0.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54595 | 2023-08-17T08:46:24Z | 2023-08-17T10:10:42Z | null | 2023-08-17T10:10:58Z |
Backport PR #54535 on branch 2.1.x (REF: Replace "pyarrow" string storage checks with variable) | diff --git a/pandas/conftest.py b/pandas/conftest.py
index 757ca817d1b85..5210e727aeb3c 100644
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -1996,3 +1996,8 @@ def warsaw(request) -> str:
tzinfo for Europe/Warsaw using pytz, dateutil, or zoneinfo.
"""
return request.param
+
+
+@pytest.fixture()
+def arrow_string_storage():
+ return ("pyarrow",)
diff --git a/pandas/tests/arrays/string_/test_string.py b/pandas/tests/arrays/string_/test_string.py
index cfd3314eb5944..de93e89ecacd5 100644
--- a/pandas/tests/arrays/string_/test_string.py
+++ b/pandas/tests/arrays/string_/test_string.py
@@ -115,8 +115,8 @@ def test_add(dtype):
tm.assert_series_equal(result, expected)
-def test_add_2d(dtype, request):
- if dtype.storage == "pyarrow":
+def test_add_2d(dtype, request, arrow_string_storage):
+ if dtype.storage in arrow_string_storage:
reason = "Failed: DID NOT RAISE <class 'ValueError'>"
mark = pytest.mark.xfail(raises=None, reason=reason)
request.node.add_marker(mark)
@@ -144,8 +144,8 @@ def test_add_sequence(dtype):
tm.assert_extension_array_equal(result, expected)
-def test_mul(dtype, request):
- if dtype.storage == "pyarrow":
+def test_mul(dtype, request, arrow_string_storage):
+ if dtype.storage in arrow_string_storage:
reason = "unsupported operand type(s) for *: 'ArrowStringArray' and 'int'"
mark = pytest.mark.xfail(raises=NotImplementedError, reason=reason)
request.node.add_marker(mark)
@@ -369,8 +369,8 @@ def test_min_max(method, skipna, dtype, request):
@pytest.mark.parametrize("method", ["min", "max"])
@pytest.mark.parametrize("box", [pd.Series, pd.array])
-def test_min_max_numpy(method, box, dtype, request):
- if dtype.storage == "pyarrow" and box is pd.array:
+def test_min_max_numpy(method, box, dtype, request, arrow_string_storage):
+ if dtype.storage in arrow_string_storage and box is pd.array:
if box is pd.array:
reason = "'<=' not supported between instances of 'str' and 'NoneType'"
else:
@@ -384,7 +384,7 @@ def test_min_max_numpy(method, box, dtype, request):
assert result == expected
-def test_fillna_args(dtype, request):
+def test_fillna_args(dtype, request, arrow_string_storage):
# GH 37987
arr = pd.array(["a", pd.NA], dtype=dtype)
@@ -397,7 +397,7 @@ def test_fillna_args(dtype, request):
expected = pd.array(["a", "b"], dtype=dtype)
tm.assert_extension_array_equal(res, expected)
- if dtype.storage == "pyarrow":
+ if dtype.storage in arrow_string_storage:
msg = "Invalid value '1' for dtype string"
else:
msg = "Cannot set non-string value '1' into a StringArray."
@@ -503,10 +503,10 @@ def test_use_inf_as_na(values, expected, dtype):
tm.assert_frame_equal(result, expected)
-def test_memory_usage(dtype):
+def test_memory_usage(dtype, arrow_string_storage):
# GH 33963
- if dtype.storage == "pyarrow":
+ if dtype.storage in arrow_string_storage:
pytest.skip(f"not applicable for {dtype.storage}")
series = pd.Series(["a", "b", "c"], dtype=dtype)
diff --git a/pandas/tests/arrays/string_/test_string_arrow.py b/pandas/tests/arrays/string_/test_string_arrow.py
index 6912d5038ae0d..1ab628f186b47 100644
--- a/pandas/tests/arrays/string_/test_string_arrow.py
+++ b/pandas/tests/arrays/string_/test_string_arrow.py
@@ -49,10 +49,10 @@ def test_config_bad_storage_raises():
@skip_if_no_pyarrow
@pytest.mark.parametrize("chunked", [True, False])
@pytest.mark.parametrize("array", ["numpy", "pyarrow"])
-def test_constructor_not_string_type_raises(array, chunked):
+def test_constructor_not_string_type_raises(array, chunked, arrow_string_storage):
import pyarrow as pa
- array = pa if array == "pyarrow" else np
+ array = pa if array in arrow_string_storage else np
arr = array.array([1, 2, 3])
if chunked:
diff --git a/pandas/tests/extension/test_string.py b/pandas/tests/extension/test_string.py
index 6597ff84e3ca4..4e142eb6e14b8 100644
--- a/pandas/tests/extension/test_string.py
+++ b/pandas/tests/extension/test_string.py
@@ -103,8 +103,8 @@ def test_is_not_string_type(self, dtype):
class TestInterface(base.BaseInterfaceTests):
- def test_view(self, data, request):
- if data.dtype.storage == "pyarrow":
+ def test_view(self, data, request, arrow_string_storage):
+ if data.dtype.storage in arrow_string_storage:
pytest.skip(reason="2D support not implemented for ArrowStringArray")
super().test_view(data)
@@ -116,8 +116,8 @@ def test_from_dtype(self, data):
class TestReshaping(base.BaseReshapingTests):
- def test_transpose(self, data, request):
- if data.dtype.storage == "pyarrow":
+ def test_transpose(self, data, request, arrow_string_storage):
+ if data.dtype.storage in arrow_string_storage:
pytest.skip(reason="2D support not implemented for ArrowStringArray")
super().test_transpose(data)
@@ -127,8 +127,8 @@ class TestGetitem(base.BaseGetitemTests):
class TestSetitem(base.BaseSetitemTests):
- def test_setitem_preserves_views(self, data, request):
- if data.dtype.storage == "pyarrow":
+ def test_setitem_preserves_views(self, data, request, arrow_string_storage):
+ if data.dtype.storage in arrow_string_storage:
pytest.skip(reason="2D support not implemented for ArrowStringArray")
super().test_setitem_preserves_views(data)
| Backport PR #54535: REF: Replace "pyarrow" string storage checks with variable | https://api.github.com/repos/pandas-dev/pandas/pulls/54594 | 2023-08-17T08:42:45Z | 2023-08-17T15:05:04Z | 2023-08-17T15:05:04Z | 2023-08-17T15:05:05Z |
DEPR: non keyword arguments in to_pickle | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index c35473b852eb9..9295ad6cb9aa6 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -92,7 +92,7 @@ Other API changes
Deprecations
~~~~~~~~~~~~
--
+- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_pickle` except ``path``. (:issue:`54229`)
-
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 7624c8f7c7930..88fa32ca5b20b 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -3017,6 +3017,9 @@ def to_sql(
)
@final
+ @deprecate_nonkeyword_arguments(
+ version=None, allowed_args=["self", "path"], name="to_pickle"
+ )
@doc(
storage_options=_shared_docs["storage_options"],
compression_options=_shared_docs["compression_options"] % "path",
| - [ ] xref #54229
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/v2.2.0.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54593 | 2023-08-17T08:12:30Z | 2023-08-17T10:36:59Z | null | 2023-08-17T10:37:02Z |
Implement any and all for pyarrow numpy strings | diff --git a/pandas/core/arrays/string_arrow.py b/pandas/core/arrays/string_arrow.py
index 1df0be12a8127..cc3bc5900c4c2 100644
--- a/pandas/core/arrays/string_arrow.py
+++ b/pandas/core/arrays/string_arrow.py
@@ -554,3 +554,16 @@ def value_counts(self, dropna: bool = True):
return Series(
result._values.to_numpy(), index=result.index, name=result.name, copy=False
)
+
+ def _reduce(
+ self, name: str, *, skipna: bool = True, keepdims: bool = False, **kwargs
+ ):
+ if name in ["any", "all"]:
+ arr = pc.and_kleene(
+ pc.invert(pc.is_null(self._pa_array)), pc.not_equal(self._pa_array, "")
+ )
+ return ArrowExtensionArray(arr)._reduce(
+ name, skipna=skipna, keepdims=keepdims, **kwargs
+ )
+ else:
+ return super()._reduce(name, skipna=skipna, keepdims=keepdims, **kwargs)
diff --git a/pandas/tests/extension/test_string.py b/pandas/tests/extension/test_string.py
index d761d5081958b..840dd1057745f 100644
--- a/pandas/tests/extension/test_string.py
+++ b/pandas/tests/extension/test_string.py
@@ -158,7 +158,11 @@ def test_fillna_no_op_returns_copy(self, data):
class TestReduce(base.BaseReduceTests):
def _supports_reduction(self, ser: pd.Series, op_name: str) -> bool:
- return op_name in ["min", "max"]
+ return (
+ op_name in ["min", "max"]
+ or ser.dtype.storage == "pyarrow_numpy" # type: ignore[union-attr]
+ and op_name in ("any", "all")
+ )
class TestMethods(base.BaseMethodsTests):
diff --git a/pandas/tests/reductions/test_reductions.py b/pandas/tests/reductions/test_reductions.py
index 87892a81cef3d..021252500e814 100644
--- a/pandas/tests/reductions/test_reductions.py
+++ b/pandas/tests/reductions/test_reductions.py
@@ -1078,6 +1078,25 @@ def test_any_all_datetimelike(self):
assert df.any().all()
assert not df.all().any()
+ def test_any_all_pyarrow_string(self):
+ # GH#54591
+ pytest.importorskip("pyarrow")
+ ser = Series(["", "a"], dtype="string[pyarrow_numpy]")
+ assert ser.any()
+ assert not ser.all()
+
+ ser = Series([None, "a"], dtype="string[pyarrow_numpy]")
+ assert ser.any()
+ assert not ser.all()
+
+ ser = Series([None, ""], dtype="string[pyarrow_numpy]")
+ assert not ser.any()
+ assert not ser.all()
+
+ ser = Series(["a", "b"], dtype="string[pyarrow_numpy]")
+ assert ser.any()
+ assert ser.all()
+
def test_timedelta64_analytics(self):
# index min/max
dti = date_range("2012-1-1", periods=3, freq="D")
| This should work if we want to follow NumPy semantics as closely as possible. I think it holds in general, but that's a discussion for another day | https://api.github.com/repos/pandas-dev/pandas/pulls/54591 | 2023-08-16T22:33:46Z | 2023-08-28T11:16:59Z | 2023-08-28T11:16:59Z | 2023-08-28T20:43:45Z |
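The `_reduce` override maps each element to "non-null and non-empty" via `pc.and_kleene(pc.invert(pc.is_null(...)), pc.not_equal(..., ""))` before delegating to the boolean `any`/`all` path. The same truthiness rule can be sketched in plain Python, with `None` standing in for a missing value (under the assumption that missing and empty strings both count as False, which is what the new tests assert):

```python
def string_any_all(values, op):
    """Reduce a list of strings/None with 'any' or 'all'.

    An element is truthy only if it is non-missing and non-empty,
    mirroring pc.and_kleene(~is_null(x), x != "") from the patch.
    """
    truthy = [v is not None and v != "" for v in values]
    return any(truthy) if op == "any" else all(truthy)
```

This reproduces the four cases from `test_any_all_pyarrow_string`: for example `["", "a"]` is `any`-True but `all`-False, and `[None, ""]` is False under both reductions.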
Backport PR #54545 on branch 2.1.x (DOC: whatsnew 2.1.0 refinements) | diff --git a/doc/source/reference/series.rst b/doc/source/reference/series.rst
index 5a43e5796d1d9..41705620d4bc7 100644
--- a/doc/source/reference/series.rst
+++ b/doc/source/reference/series.rst
@@ -314,6 +314,7 @@ Datetime properties
Series.dt.weekday
Series.dt.dayofyear
Series.dt.day_of_year
+ Series.dt.days_in_month
Series.dt.quarter
Series.dt.is_month_start
Series.dt.is_month_end
@@ -327,6 +328,7 @@ Datetime properties
Series.dt.tz
Series.dt.freq
Series.dt.unit
+ Series.dt.normalize
Datetime methods
^^^^^^^^^^^^^^^^
diff --git a/doc/source/whatsnew/v2.1.0.rst b/doc/source/whatsnew/v2.1.0.rst
index d1a689dc60830..a8004dfd506b0 100644
--- a/doc/source/whatsnew/v2.1.0.rst
+++ b/doc/source/whatsnew/v2.1.0.rst
@@ -44,7 +44,7 @@ This release introduces an option ``future.infer_string`` that infers all
strings as PyArrow backed strings with dtype ``pd.ArrowDtype(pa.string())`` instead.
This option only works if PyArrow is installed. PyArrow backed strings have a
significantly reduced memory footprint and provide a big performance improvement
-compared to NumPy object.
+compared to NumPy object (:issue:`54430`).
The option can be enabled with:
@@ -60,8 +60,8 @@ DataFrame reductions preserve extension dtypes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In previous versions of pandas, the results of DataFrame reductions
-(:meth:`DataFrame.sum` :meth:`DataFrame.mean` etc.) had numpy dtypes, even when the DataFrames
-were of extension dtypes. Pandas can now keep the dtypes when doing reductions over Dataframe
+(:meth:`DataFrame.sum` :meth:`DataFrame.mean` etc.) had NumPy dtypes, even when the DataFrames
+were of extension dtypes. Pandas can now keep the dtypes when doing reductions over DataFrame
columns with a common dtype (:issue:`52788`).
*Old Behavior*
@@ -90,9 +90,9 @@ columns with a common dtype (:issue:`52788`).
df = df.astype("int64[pyarrow]")
df.sum()
-Notice that the dtype is now a masked dtype and pyarrow dtype, respectively, while previously it was a numpy integer dtype.
+Notice that the dtype is now a masked dtype and PyArrow dtype, respectively, while previously it was a NumPy integer dtype.
-To allow Dataframe reductions to preserve extension dtypes, :meth:`ExtensionArray._reduce` has gotten a new keyword parameter ``keepdims``. Calling :meth:`ExtensionArray._reduce` with ``keepdims=True`` should return an array of length 1 along the reduction axis. In order to maintain backward compatibility, the parameter is not required, but will it become required in the future. If the parameter is not found in the signature, DataFrame reductions can not preserve extension dtypes. Also, if the parameter is not found, a ``FutureWarning`` will be emitted and type checkers like mypy may complain about the signature not being compatible with :meth:`ExtensionArray._reduce`.
+To allow DataFrame reductions to preserve extension dtypes, :meth:`.ExtensionArray._reduce` has gotten a new keyword parameter ``keepdims``. Calling :meth:`.ExtensionArray._reduce` with ``keepdims=True`` should return an array of length 1 along the reduction axis. In order to maintain backward compatibility, the parameter is not required, but it will become required in the future. If the parameter is not found in the signature, DataFrame reductions cannot preserve extension dtypes. Also, if the parameter is not found, a ``FutureWarning`` will be emitted and type checkers like mypy may complain about the signature not being compatible with :meth:`.ExtensionArray._reduce`.
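As a quick illustration of the dtype-preserving reductions described above (a minimal sketch; the preserved masked dtype on the result assumes pandas 2.1 or newer, while the summed values are version-independent):

```python
import pandas as pd

# All columns share the masked extension dtype Int64
df = pd.DataFrame({"a": [1, 2], "b": [3, 4]}, dtype="Int64")

result = df.sum()
# On pandas >= 2.1 the result keeps the masked Int64 dtype instead of
# being cast to a NumPy int64; the values are [3, 7] either way.
print(result)
```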
.. _whatsnew_210.enhancements.cow:
@@ -106,7 +106,7 @@ Copy-on-Write improvements
of Index objects and specifying ``copy=False``, will now use a lazy copy
of those Index objects for the columns of the DataFrame (:issue:`52947`)
- A shallow copy of a Series or DataFrame (``df.copy(deep=False)``) will now also return
- a shallow copy of the rows/columns ``Index`` objects instead of only a shallow copy of
+ a shallow copy of the rows/columns :class:`Index` objects instead of only a shallow copy of
the data, i.e. the index of the result is no longer identical
(``df.copy(deep=False).index is df.index`` is no longer True) (:issue:`53721`)
- :meth:`DataFrame.head` and :meth:`DataFrame.tail` will now return deep copies (:issue:`54011`)
@@ -130,8 +130,10 @@ Copy-on-Write improvements
.. _whatsnew_210.enhancements.map_na_action:
-``map(func, na_action="ignore")`` now works for all array types
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+New :meth:`DataFrame.map` method and support for ExtensionArrays
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+:meth:`DataFrame.map` has been added and :meth:`DataFrame.applymap` has been deprecated. :meth:`DataFrame.map` has the same functionality as :meth:`DataFrame.applymap`, but the new name better communicates that this is the :class:`DataFrame` version of :meth:`Series.map` (:issue:`52353`).
When given a callable, :meth:`Series.map` applies the callable to all elements of the :class:`Series`.
Similarly, :meth:`DataFrame.map` applies the callable to all elements of the :class:`DataFrame`,
@@ -139,8 +141,8 @@ while :meth:`Index.map` applies the callable to all elements of the :class:`Inde
Frequently, it is not desirable to apply the callable to nan-like values of the array and to avoid doing
that, the ``map`` method could be called with ``na_action="ignore"``, i.e. ``ser.map(func, na_action="ignore")``.
-However, ``na_action="ignore"`` was not implemented for many ``ExtensionArray`` and ``Index`` types
-and ``na_action="ignore"`` did not work correctly for any ``ExtensionArray`` subclass except the nullable numeric ones (i.e. with dtype :class:`Int64` etc.).
+However, ``na_action="ignore"`` was not implemented for many :class:`.ExtensionArray` and ``Index`` types
+and ``na_action="ignore"`` did not work correctly for any :class:`.ExtensionArray` subclass except the nullable numeric ones (i.e. with dtype :class:`Int64` etc.).
``na_action="ignore"`` now works for all array types (:issue:`52219`, :issue:`51645`, :issue:`51809`, :issue:`51936`, :issue:`52033`; :issue:`52096`).
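A minimal sketch of the ``na_action="ignore"`` pattern (object dtype here for brevity; per the note above, the same call now also works for extension-array-backed Series):

```python
import pandas as pd

ser = pd.Series(["a", "b", None])  # None is a missing value

# With na_action="ignore", the callable is only applied to
# non-missing entries; missing values propagate unchanged.
mapped = ser.map(str.upper, na_action="ignore")
print(mapped)
```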
@@ -172,11 +174,9 @@ and ``na_action="ignore"`` did not work correctly for any ``ExtensionArray`` sub
idx = pd.Index(ser)
idx.map(str.upper, na_action="ignore")
-Notice also that in this version, :meth:`DataFrame.map` been added and :meth:`DataFrame.applymap` has been deprecated. :meth:`DataFrame.map` has the same functionality as :meth:`DataFrame.applymap`, but the new name better communicate that this is the :class:`DataFrame` version of :meth:`Series.map` (:issue:`52353`).
-
Also, note that :meth:`Categorical.map` implicitly has had its ``na_action`` set to ``"ignore"`` by default.
-This has been deprecated and will :meth:`Categorical.map` in the future change the default
-to ``na_action=None``, like for all the other array types.
+This has been deprecated and the default for :meth:`Categorical.map` will change
+to ``na_action=None``, consistent with all the other array types.
.. _whatsnew_210.enhancements.new_stack:
@@ -222,8 +222,9 @@ If the input contains NA values, the previous version would drop those as well w
Other enhancements
^^^^^^^^^^^^^^^^^^
- :meth:`Series.ffill` and :meth:`Series.bfill` are now supported for objects with :class:`IntervalDtype` (:issue:`54247`)
-- :meth:`Categorical.map` and :meth:`CategoricalIndex.map` now have a ``na_action`` parameter.
- :meth:`Categorical.map` implicitly had a default value of ``"ignore"`` for ``na_action``. This has formally been deprecated and will be changed to ``None`` in the future.
+- Added ``filters`` parameter to :func:`read_parquet` to filter out data, compatible with both ``engines`` (:issue:`53212`)
+- :meth:`.Categorical.map` and :meth:`CategoricalIndex.map` now have a ``na_action`` parameter.
+ :meth:`.Categorical.map` implicitly had a default value of ``"ignore"`` for ``na_action``. This has formally been deprecated and will be changed to ``None`` in the future.
Also notice that :meth:`Series.map` has default ``na_action=None`` and calls to series with categorical data will now use ``na_action=None`` unless explicitly set otherwise (:issue:`44279`)
- :class:`api.extensions.ExtensionArray` now has a :meth:`~api.extensions.ExtensionArray.map` method (:issue:`51809`)
- :meth:`DataFrame.applymap` now uses the :meth:`~api.extensions.ExtensionArray.map` method of underlying :class:`api.extensions.ExtensionArray` instances (:issue:`52219`)
@@ -231,49 +232,41 @@ Other enhancements
- :meth:`MultiIndex.sortlevel` and :meth:`Index.sortlevel` gained a new keyword ``na_position`` (:issue:`51612`)
- :meth:`arrays.DatetimeArray.map`, :meth:`arrays.TimedeltaArray.map` and :meth:`arrays.PeriodArray.map` can now take a ``na_action`` argument (:issue:`51644`)
- :meth:`arrays.SparseArray.map` now supports ``na_action`` (:issue:`52096`).
-- :meth:`pandas.read_html` now supports the ``storage_options`` keyword when used with a URL, allowing users to add headers the outbound HTTP request (:issue:`49944`)
-- Add :meth:`diff()` and :meth:`round()` for :class:`Index` (:issue:`19708`)
+- :meth:`pandas.read_html` now supports the ``storage_options`` keyword when used with a URL, allowing users to add headers to the outbound HTTP request (:issue:`49944`)
+- Add :meth:`Index.diff` and :meth:`Index.round` (:issue:`19708`)
+- Add ``"latex-math"`` as an option to the ``escape`` argument of :class:`.Styler` which will not escape all characters between ``"\("`` and ``"\)"`` during formatting (:issue:`51903`)
- Add dtype of categories to ``repr`` information of :class:`CategoricalDtype` (:issue:`52179`)
-- Added to the escape mode "latex-math" preserving without escaping all characters between "\(" and "\)" in formatter (:issue:`51903`)
- Adding ``engine_kwargs`` parameter to :func:`read_excel` (:issue:`52214`)
- Classes that are useful for type-hinting have been added to the public API in the new submodule ``pandas.api.typing`` (:issue:`48577`)
-- Implemented :attr:`Series.dt.is_month_start`, :attr:`Series.dt.is_month_end`, :attr:`Series.dt.is_year_start`, :attr:`Series.dt.is_year_end`, :attr:`Series.dt.is_quarter_start`, :attr:`Series.dt.is_quarter_end`, :attr:`Series.dt.is_days_in_month`, :attr:`Series.dt.unit`, :attr:`Series.dt.is_normalize`, :meth:`Series.dt.day_name`, :meth:`Series.dt.month_name`, :meth:`Series.dt.tz_convert` for :class:`ArrowDtype` with ``pyarrow.timestamp`` (:issue:`52388`, :issue:`51718`)
-- :meth:`.DataFrameGroupby.agg` and :meth:`.DataFrameGroupby.transform` now support grouping by multiple keys when the index is not a :class:`MultiIndex` for ``engine="numba"`` (:issue:`53486`)
-- :meth:`.SeriesGroupby.agg` and :meth:`.DataFrameGroupby.agg` now support passing in multiple functions for ``engine="numba"`` (:issue:`53486`)
-- :meth:`.SeriesGroupby.transform` and :meth:`.DataFrameGroupby.transform` now support passing in a string as the function for ``engine="numba"`` (:issue:`53579`)
-- :meth:`Categorical.from_codes` has gotten a ``validate`` parameter (:issue:`50975`)
+- Implemented :attr:`Series.dt.is_month_start`, :attr:`Series.dt.is_month_end`, :attr:`Series.dt.is_year_start`, :attr:`Series.dt.is_year_end`, :attr:`Series.dt.is_quarter_start`, :attr:`Series.dt.is_quarter_end`, :attr:`Series.dt.days_in_month`, :attr:`Series.dt.unit`, :attr:`Series.dt.normalize`, :meth:`Series.dt.day_name`, :meth:`Series.dt.month_name`, :meth:`Series.dt.tz_convert` for :class:`ArrowDtype` with ``pyarrow.timestamp`` (:issue:`52388`, :issue:`51718`)
+- :meth:`.DataFrameGroupBy.agg` and :meth:`.DataFrameGroupBy.transform` now support grouping by multiple keys when the index is not a :class:`MultiIndex` for ``engine="numba"`` (:issue:`53486`)
+- :meth:`.SeriesGroupBy.agg` and :meth:`.DataFrameGroupBy.agg` now support passing in multiple functions for ``engine="numba"`` (:issue:`53486`)
+- :meth:`.SeriesGroupBy.transform` and :meth:`.DataFrameGroupBy.transform` now support passing in a string as the function for ``engine="numba"`` (:issue:`53579`)
- :meth:`DataFrame.stack` gained the ``sort`` keyword to dictate whether the resulting :class:`MultiIndex` levels are sorted (:issue:`15105`)
- :meth:`DataFrame.unstack` gained the ``sort`` keyword to dictate whether the resulting :class:`MultiIndex` levels are sorted (:issue:`15105`)
-- :meth:`Series.explode` now supports pyarrow-backed list types (:issue:`53602`)
+- :meth:`Series.explode` now supports PyArrow-backed list types (:issue:`53602`)
- :meth:`Series.str.join` now supports ``ArrowDtype(pa.string())`` (:issue:`53646`)
-- Added :meth:`ExtensionArray.interpolate` used by :meth:`Series.interpolate` and :meth:`DataFrame.interpolate` (:issue:`53659`)
+- Add ``validate`` parameter to :meth:`Categorical.from_codes` (:issue:`50975`)
+- Added :meth:`.ExtensionArray.interpolate` used by :meth:`Series.interpolate` and :meth:`DataFrame.interpolate` (:issue:`53659`)
- Added ``engine_kwargs`` parameter to :meth:`DataFrame.to_excel` (:issue:`53220`)
- Implemented :func:`api.interchange.from_dataframe` for :class:`DatetimeTZDtype` (:issue:`54239`)
-- Implemented ``__from_arrow__`` on :class:`DatetimeTZDtype`. (:issue:`52201`)
-- Implemented ``__pandas_priority__`` to allow custom types to take precedence over :class:`DataFrame`, :class:`Series`, :class:`Index`, or :class:`ExtensionArray` for arithmetic operations, :ref:`see the developer guide <extending.pandas_priority>` (:issue:`48347`)
+- Implemented ``__from_arrow__`` on :class:`DatetimeTZDtype` (:issue:`52201`)
+- Implemented ``__pandas_priority__`` to allow custom types to take precedence over :class:`DataFrame`, :class:`Series`, :class:`Index`, or :class:`.ExtensionArray` for arithmetic operations, :ref:`see the developer guide <extending.pandas_priority>` (:issue:`48347`)
- Improve error message when having incompatible columns using :meth:`DataFrame.merge` (:issue:`51861`)
- Improve error message when setting :class:`DataFrame` with wrong number of columns through :meth:`DataFrame.isetitem` (:issue:`51701`)
- Improved error handling when using :meth:`DataFrame.to_json` with incompatible ``index`` and ``orient`` arguments (:issue:`52143`)
-- Improved error message when creating a DataFrame with empty data (0 rows), no index and an incorrect number of columns. (:issue:`52084`)
-- Improved error message when providing an invalid ``index`` or ``offset`` argument to :class:`pandas.api.indexers.VariableOffsetWindowIndexer` (:issue:`54379`)
+- Improved error message when creating a DataFrame with empty data (0 rows), no index and an incorrect number of columns (:issue:`52084`)
+- Improved error message when providing an invalid ``index`` or ``offset`` argument to :class:`.VariableOffsetWindowIndexer` (:issue:`54379`)
- Let :meth:`DataFrame.to_feather` accept a non-default :class:`Index` and non-string column names (:issue:`51787`)
- Added a new parameter ``by_row`` to :meth:`Series.apply` and :meth:`DataFrame.apply`. When set to ``False`` the supplied callables will always operate on the whole Series or DataFrame (:issue:`53400`, :issue:`53601`).
- :meth:`DataFrame.shift` and :meth:`Series.shift` now allow shifting by multiple periods by supplying a list of periods (:issue:`44424`)
-- Groupby aggregations (such as :meth:`.DataFrameGroupby.sum`) now can preserve the dtype of the input instead of casting to ``float64`` (:issue:`44952`)
+- Groupby aggregations with ``numba`` (such as :meth:`.DataFrameGroupBy.sum`) now can preserve the dtype of the input instead of casting to ``float64`` (:issue:`44952`)
- Improved error message when :meth:`.DataFrameGroupBy.agg` failed (:issue:`52930`)
-- Many read/to_* functions, such as :meth:`DataFrame.to_pickle` and :func:`read_csv`, support forwarding compression arguments to lzma.LZMAFile (:issue:`52979`)
-- Reductions :meth:`Series.argmax`, :meth:`Series.argmin`, :meth:`Series.idxmax`, :meth:`Series.idxmin`, :meth:`Index.argmax`, :meth:`Index.argmin`, :meth:`DataFrame.idxmax`, :meth:`DataFrame.idxmin` are now supported for object-dtype objects (:issue:`4279`, :issue:`18021`, :issue:`40685`, :issue:`43697`)
+- Many read/to_* functions, such as :meth:`DataFrame.to_pickle` and :func:`read_csv`, support forwarding compression arguments to ``lzma.LZMAFile`` (:issue:`52979`)
+- Reductions :meth:`Series.argmax`, :meth:`Series.argmin`, :meth:`Series.idxmax`, :meth:`Series.idxmin`, :meth:`Index.argmax`, :meth:`Index.argmin`, :meth:`DataFrame.idxmax`, :meth:`DataFrame.idxmin` are now supported for object-dtype (:issue:`4279`, :issue:`18021`, :issue:`40685`, :issue:`43697`)
- :meth:`DataFrame.to_parquet` and :func:`read_parquet` will now write and read ``attrs`` respectively (:issue:`54346`)
- Added support for the DataFrame Consortium Standard (:issue:`54383`)
-- Performance improvement in :meth:`.GroupBy.quantile` (:issue:`51722`)
-
-.. ---------------------------------------------------------------------------
-.. _whatsnew_210.notable_bug_fixes:
-
-Notable bug fixes
-~~~~~~~~~~~~~~~~~
-
-These are bug fixes that might have notable behavior changes.
+- Performance improvement in :meth:`.DataFrameGroupBy.quantile` and :meth:`.SeriesGroupBy.quantile` (:issue:`51722`)
.. ---------------------------------------------------------------------------
.. _whatsnew_210.api_breaking:
@@ -363,7 +356,7 @@ See :ref:`install.dependencies` and :ref:`install.optional_dependencies` for mor
Other API changes
^^^^^^^^^^^^^^^^^
-- :class:`arrays.PandasArray` has been renamed ``NumpyExtensionArray`` and the attached dtype name changed from ``PandasDtype`` to ``NumpyEADtype``; importing ``PandasArray`` still works until the next major version (:issue:`53694`)
+- :class:`arrays.PandasArray` has been renamed :class:`.NumpyExtensionArray` and the attached dtype name changed from ``PandasDtype`` to ``NumpyEADtype``; importing ``PandasArray`` still works until the next major version (:issue:`53694`)
.. ---------------------------------------------------------------------------
.. _whatsnew_210.deprecations:
@@ -374,6 +367,8 @@ Deprecations
Deprecated silent upcasting in setitem-like Series operations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+PDEP-6: https://pandas.pydata.org/pdeps/0006-ban-upcasting.html
+
Setitem-like operations on Series (or DataFrame columns) which silently upcast the dtype are
deprecated and show a warning. Examples of affected operations are:
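One such operation can be sketched as follows (the ``FutureWarning`` is emitted on pandas 2.1; on older versions the upcast happens silently):

```python
import warnings

import pandas as pd

ser = pd.Series([1, 2, 3])  # int64 dtype

# Assigning an incompatible value upcasts the whole Series to object
# dtype; under this deprecation it warns and will eventually raise.
with warnings.catch_warnings():
    warnings.simplefilter("ignore", FutureWarning)
    ser.iloc[0] = "not an int"

print(ser.dtype)
```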
@@ -506,34 +501,33 @@ and ``datetime.datetime.strptime``:
Other Deprecations
^^^^^^^^^^^^^^^^^^
-- Deprecated 'broadcast_axis' keyword in :meth:`Series.align` and :meth:`DataFrame.align`, upcast before calling ``align`` with ``left = DataFrame({col: left for col in right.columns}, index=right.index)`` (:issue:`51856`)
-- Deprecated 'downcast' keyword in :meth:`Index.fillna` (:issue:`53956`)
-- Deprecated 'fill_method' and 'limit' keywords in :meth:`DataFrame.pct_change`, :meth:`Series.pct_change`, :meth:`.DataFrameGroupBy.pct_change`, and :meth:`.SeriesGroupBy.pct_change`, explicitly call ``ffill`` or ``bfill`` before calling ``pct_change`` instead (:issue:`53491`)
-- Deprecated 'method', 'limit', and 'fill_axis' keywords in :meth:`DataFrame.align` and :meth:`Series.align`, explicitly call ``fillna`` on the alignment results instead (:issue:`51856`)
-- Deprecated 'quantile' keyword in :meth:`.Rolling.quantile` and :meth:`.Expanding.quantile`, renamed as 'q' instead (:issue:`52550`)
- Deprecated :attr:`.DataFrameGroupBy.dtypes`, check ``dtypes`` on the underlying object instead (:issue:`51045`)
- Deprecated :attr:`DataFrame._data` and :attr:`Series._data`, use public APIs instead (:issue:`33333`)
- Deprecated :func:`concat` behavior when any of the objects being concatenated have length 0; in the past the dtypes of empty objects were ignored when determining the resulting dtype, in a future version they will not (:issue:`39122`)
+- Deprecated :meth:`.Categorical.to_list`, use ``obj.tolist()`` instead (:issue:`51254`)
- Deprecated :meth:`.DataFrameGroupBy.all` and :meth:`.DataFrameGroupBy.any` with datetime64 or :class:`PeriodDtype` values, matching the :class:`Series` and :class:`DataFrame` deprecations (:issue:`34479`)
-- Deprecated :meth:`.DataFrameGroupBy.apply` and methods on the objects returned by :meth:`.DataFrameGroupBy.resample` operating on the grouping column(s); select the columns to operate on after groupby to either explicitly include or exclude the groupings and avoid the ``FutureWarning`` (:issue:`7155`)
-- Deprecated :meth:`Categorical.to_list`, use ``obj.tolist()`` instead (:issue:`51254`)
- Deprecated ``axis=1`` in :meth:`DataFrame.ewm`, :meth:`DataFrame.rolling`, :meth:`DataFrame.expanding`, transpose before calling the method instead (:issue:`51778`)
- Deprecated ``axis=1`` in :meth:`DataFrame.groupby` and in :class:`Grouper` constructor, do ``frame.T.groupby(...)`` instead (:issue:`51203`)
+- Deprecated ``broadcast_axis`` keyword in :meth:`Series.align` and :meth:`DataFrame.align`, upcast before calling ``align`` with ``left = DataFrame({col: left for col in right.columns}, index=right.index)`` (:issue:`51856`)
+- Deprecated ``downcast`` keyword in :meth:`Index.fillna` (:issue:`53956`)
+- Deprecated ``fill_method`` and ``limit`` keywords in :meth:`DataFrame.pct_change`, :meth:`Series.pct_change`, :meth:`.DataFrameGroupBy.pct_change`, and :meth:`.SeriesGroupBy.pct_change`, explicitly call e.g. :meth:`DataFrame.ffill` or :meth:`DataFrame.bfill` before calling ``pct_change`` instead (:issue:`53491`)
+- Deprecated ``method``, ``limit``, and ``fill_axis`` keywords in :meth:`DataFrame.align` and :meth:`Series.align`, explicitly call :meth:`DataFrame.fillna` or :meth:`Series.fillna` on the alignment results instead (:issue:`51856`)
+- Deprecated ``quantile`` keyword in :meth:`.Rolling.quantile` and :meth:`.Expanding.quantile`, renamed to ``q`` instead (:issue:`52550`)
- Deprecated accepting slices in :meth:`DataFrame.take`, call ``obj[slicer]`` or pass a sequence of integers instead (:issue:`51539`)
- Deprecated behavior of :meth:`DataFrame.idxmax`, :meth:`DataFrame.idxmin`, :meth:`Series.idxmax`, :meth:`Series.idxmin` with all-NA entries or any-NA and ``skipna=False``; in a future version these will raise ``ValueError`` (:issue:`51276`)
- Deprecated explicit support for subclassing :class:`Index` (:issue:`45289`)
-- Deprecated making functions given to :meth:`Series.agg` attempt to operate on each element in the :class:`Series` and only operate on the whole :class:`Series` if the elementwise operations failed. In the future, functions given to :meth:`Series.agg` will always operate on the whole :class:`Series` only. To keep the current behavior, use :meth:`Series.transform` instead. (:issue:`53325`)
-- Deprecated making the functions in a list of functions given to :meth:`DataFrame.agg` attempt to operate on each element in the :class:`DataFrame` and only operate on the columns of the :class:`DataFrame` if the elementwise operations failed. To keep the current behavior, use :meth:`DataFrame.transform` instead. (:issue:`53325`)
+- Deprecated making functions given to :meth:`Series.agg` attempt to operate on each element in the :class:`Series` and only operate on the whole :class:`Series` if the elementwise operations failed. In the future, functions given to :meth:`Series.agg` will always operate on the whole :class:`Series` only. To keep the current behavior, use :meth:`Series.transform` instead (:issue:`53325`)
+- Deprecated making the functions in a list of functions given to :meth:`DataFrame.agg` attempt to operate on each element in the :class:`DataFrame` and only operate on the columns of the :class:`DataFrame` if the elementwise operations failed. To keep the current behavior, use :meth:`DataFrame.transform` instead (:issue:`53325`)
- Deprecated passing a :class:`DataFrame` to :meth:`DataFrame.from_records`, use :meth:`DataFrame.set_index` or :meth:`DataFrame.drop` instead (:issue:`51353`)
- Deprecated silently dropping unrecognized timezones when parsing strings to datetimes (:issue:`18702`)
-- Deprecated the "downcast" keyword in :meth:`Series.interpolate`, :meth:`DataFrame.interpolate`, :meth:`Series.fillna`, :meth:`DataFrame.fillna`, :meth:`Series.ffill`, :meth:`DataFrame.ffill`, :meth:`Series.bfill`, :meth:`DataFrame.bfill` (:issue:`40988`)
- Deprecated the ``axis`` keyword in :meth:`DataFrame.ewm`, :meth:`Series.ewm`, :meth:`DataFrame.rolling`, :meth:`Series.rolling`, :meth:`DataFrame.expanding`, :meth:`Series.expanding` (:issue:`51778`)
- Deprecated the ``axis`` keyword in :meth:`DataFrame.resample`, :meth:`Series.resample` (:issue:`51778`)
+- Deprecated the ``downcast`` keyword in :meth:`Series.interpolate`, :meth:`DataFrame.interpolate`, :meth:`Series.fillna`, :meth:`DataFrame.fillna`, :meth:`Series.ffill`, :meth:`DataFrame.ffill`, :meth:`Series.bfill`, :meth:`DataFrame.bfill` (:issue:`40988`)
- Deprecated the behavior of :func:`concat` with both ``len(keys) != len(objs)``, in a future version this will raise instead of truncating to the shorter of the two sequences (:issue:`43485`)
- Deprecated the behavior of :meth:`Series.argsort` in the presence of NA values; in a future version these will be sorted at the end instead of giving -1 (:issue:`54219`)
- Deprecated the default of ``observed=False`` in :meth:`DataFrame.groupby` and :meth:`Series.groupby`; this will default to ``True`` in a future version (:issue:`43999`)
-- Deprecating pinning ``group.name`` to each group in :meth:`SeriesGroupBy.aggregate` aggregations; if your operation requires utilizing the groupby keys, iterate over the groupby object instead (:issue:`41090`)
-- Deprecated the 'axis' keyword in :meth:`.DataFrameGroupBy.idxmax`, :meth:`.DataFrameGroupBy.idxmin`, :meth:`.DataFrameGroupBy.fillna`, :meth:`.DataFrameGroupBy.take`, :meth:`.DataFrameGroupBy.skew`, :meth:`.DataFrameGroupBy.rank`, :meth:`.DataFrameGroupBy.cumprod`, :meth:`.DataFrameGroupBy.cumsum`, :meth:`.DataFrameGroupBy.cummax`, :meth:`.DataFrameGroupBy.cummin`, :meth:`.DataFrameGroupBy.pct_change`, :meth:`DataFrameGroupBy.diff`, :meth:`.DataFrameGroupBy.shift`, and :meth:`DataFrameGroupBy.corrwith`; for ``axis=1`` operate on the underlying :class:`DataFrame` instead (:issue:`50405`, :issue:`51046`)
+- Deprecated pinning ``group.name`` to each group in :meth:`.SeriesGroupBy.aggregate` aggregations; if your operation requires utilizing the groupby keys, iterate over the groupby object instead (:issue:`41090`)
+- Deprecated the ``axis`` keyword in :meth:`.DataFrameGroupBy.idxmax`, :meth:`.DataFrameGroupBy.idxmin`, :meth:`.DataFrameGroupBy.fillna`, :meth:`.DataFrameGroupBy.take`, :meth:`.DataFrameGroupBy.skew`, :meth:`.DataFrameGroupBy.rank`, :meth:`.DataFrameGroupBy.cumprod`, :meth:`.DataFrameGroupBy.cumsum`, :meth:`.DataFrameGroupBy.cummax`, :meth:`.DataFrameGroupBy.cummin`, :meth:`.DataFrameGroupBy.pct_change`, :meth:`.DataFrameGroupBy.diff`, :meth:`.DataFrameGroupBy.shift`, and :meth:`.DataFrameGroupBy.corrwith`; for ``axis=1`` operate on the underlying :class:`DataFrame` instead (:issue:`50405`, :issue:`51046`)
- Deprecated :class:`.DataFrameGroupBy` with ``as_index=False`` not including groupings in the result when they are not columns of the DataFrame (:issue:`49519`)
- Deprecated :func:`is_categorical_dtype`, use ``isinstance(obj.dtype, pd.CategoricalDtype)`` instead (:issue:`52527`)
- Deprecated :func:`is_datetime64tz_dtype`, check ``isinstance(dtype, pd.DatetimeTZDtype)`` instead (:issue:`52607`)
@@ -545,49 +539,49 @@ Other Deprecations
- Deprecated :meth:`.Styler.applymap`. Use the new :meth:`.Styler.map` method instead (:issue:`52708`)
- Deprecated :meth:`DataFrame.applymap`. Use the new :meth:`DataFrame.map` method instead (:issue:`52353`)
- Deprecated :meth:`DataFrame.swapaxes` and :meth:`Series.swapaxes`, use :meth:`DataFrame.transpose` or :meth:`Series.transpose` instead (:issue:`51946`)
-- Deprecated ``freq`` parameter in :class:`PeriodArray` constructor, pass ``dtype`` instead (:issue:`52462`)
-- Deprecated allowing non-standard inputs in :func:`take`, pass either a ``numpy.ndarray``, :class:`ExtensionArray`, :class:`Index`, or :class:`Series` (:issue:`52981`)
-- Deprecated allowing non-standard sequences for :func:`isin`, :func:`value_counts`, :func:`unique`, :func:`factorize`, case to one of ``numpy.ndarray``, :class:`Index`, :class:`ExtensionArray`, or :class:`Series` before calling (:issue:`52986`)
+- Deprecated ``freq`` parameter in :class:`.PeriodArray` constructor, pass ``dtype`` instead (:issue:`52462`)
+- Deprecated allowing non-standard inputs in :func:`take`, pass either a ``numpy.ndarray``, :class:`.ExtensionArray`, :class:`Index`, or :class:`Series` (:issue:`52981`)
+- Deprecated allowing non-standard sequences for :func:`isin`, :func:`value_counts`, :func:`unique`, :func:`factorize`, cast to one of ``numpy.ndarray``, :class:`Index`, :class:`.ExtensionArray`, or :class:`Series` before calling (:issue:`52986`)
- Deprecated behavior of :class:`DataFrame` reductions ``sum``, ``prod``, ``std``, ``var``, ``sem`` with ``axis=None``, in a future version this will operate over both axes returning a scalar instead of behaving like ``axis=0``; note this also affects numpy functions e.g. ``np.sum(df)`` (:issue:`21597`)
- Deprecated behavior of :func:`concat` when :class:`DataFrame` has columns that are all-NA, in a future version these will not be discarded when determining the resulting dtype (:issue:`40893`)
-- Deprecated behavior of :meth:`Series.dt.to_pydatetime`, in a future version this will return a :class:`Series` containing python ``datetime`` objects instead of an ``ndarray`` of datetimes; this matches the behavior of other :meth:`Series.dt` properties (:issue:`20306`)
-- Deprecated logical operations (``|``, ``&``, ``^``) between pandas objects and dtype-less sequences (e.g. ``list``, ``tuple``), wrap a sequence in a :class:`Series` or numpy array before operating instead (:issue:`51521`)
+- Deprecated behavior of :meth:`Series.dt.to_pydatetime`, in a future version this will return a :class:`Series` containing python ``datetime`` objects instead of an ``ndarray`` of datetimes; this matches the behavior of other :attr:`Series.dt` properties (:issue:`20306`)
+- Deprecated logical operations (``|``, ``&``, ``^``) between pandas objects and dtype-less sequences (e.g. ``list``, ``tuple``), wrap a sequence in a :class:`Series` or NumPy array before operating instead (:issue:`51521`)
- Deprecated making :meth:`Series.apply` return a :class:`DataFrame` when the passed-in callable returns a :class:`Series` object. In the future this will return a :class:`Series` whose values are themselves :class:`Series`. This pattern was very slow and it's recommended to use alternative methods to achieve the same goal (:issue:`52116`)
- Deprecated parameter ``convert_type`` in :meth:`Series.apply` (:issue:`52140`)
- Deprecated passing a dictionary to :meth:`.SeriesGroupBy.agg`; pass a list of aggregations instead (:issue:`50684`)
-- Deprecated the "fastpath" keyword in :class:`Categorical` constructor, use :meth:`Categorical.from_codes` instead (:issue:`20110`)
+- Deprecated the ``fastpath`` keyword in :class:`Categorical` constructor, use :meth:`Categorical.from_codes` instead (:issue:`20110`)
- Deprecated the behavior of :func:`is_bool_dtype` returning ``True`` for object-dtype :class:`Index` of bool objects (:issue:`52680`)
- Deprecated the methods :meth:`Series.bool` and :meth:`DataFrame.bool` (:issue:`51749`)
-- Deprecated unused "closed" and "normalize" keywords in the :class:`DatetimeIndex` constructor (:issue:`52628`)
-- Deprecated unused "closed" keyword in the :class:`TimedeltaIndex` constructor (:issue:`52628`)
-- Deprecated logical operation between two non boolean :class:`Series` with different indexes always coercing the result to bool dtype. In a future version, this will maintain the return type of the inputs. (:issue:`52500`, :issue:`52538`)
+- Deprecated unused ``closed`` and ``normalize`` keywords in the :class:`DatetimeIndex` constructor (:issue:`52628`)
+- Deprecated unused ``closed`` keyword in the :class:`TimedeltaIndex` constructor (:issue:`52628`)
+- Deprecated logical operation between two non-boolean :class:`Series` with different indexes always coercing the result to bool dtype. In a future version, this will maintain the return type of the inputs (:issue:`52500`, :issue:`52538`)
- Deprecated :class:`Period` and :class:`PeriodDtype` with ``BDay`` freq, use a :class:`DatetimeIndex` with ``BDay`` freq instead (:issue:`53446`)
- Deprecated :func:`value_counts`, use ``pd.Series(obj).value_counts()`` instead (:issue:`47862`)
-- Deprecated :meth:`Series.first` and :meth:`DataFrame.first` (please create a mask and filter using ``.loc`` instead) (:issue:`45908`)
+- Deprecated :meth:`Series.first` and :meth:`DataFrame.first`; create a mask and filter using ``.loc`` instead (:issue:`45908`)
- Deprecated :meth:`Series.interpolate` and :meth:`DataFrame.interpolate` for object-dtype (:issue:`53631`)
-- Deprecated :meth:`Series.last` and :meth:`DataFrame.last` (please create a mask and filter using ``.loc`` instead) (:issue:`53692`)
+- Deprecated :meth:`Series.last` and :meth:`DataFrame.last`; create a mask and filter using ``.loc`` instead (:issue:`53692`)
- Deprecated allowing arbitrary ``fill_value`` in :class:`SparseDtype`, in a future version the ``fill_value`` will need to be compatible with the ``dtype.subtype``, either a scalar that can be held by that subtype or ``NaN`` for integer or bool subtypes (:issue:`23124`)
- Deprecated allowing bool dtype in :meth:`.DataFrameGroupBy.quantile` and :meth:`.SeriesGroupBy.quantile`, consistent with the :meth:`Series.quantile` and :meth:`DataFrame.quantile` behavior (:issue:`51424`)
- Deprecated behavior of :func:`.testing.assert_series_equal` and :func:`.testing.assert_frame_equal` considering NA-like values (e.g. ``NaN`` vs ``None`` as equivalent) (:issue:`52081`)
-- Deprecated bytes input to :func:`read_excel`. To read a file path, use a string or path-like object. (:issue:`53767`)
-- Deprecated constructing :class:`SparseArray` from scalar data, pass a sequence instead (:issue:`53039`)
+- Deprecated bytes input to :func:`read_excel`. To read a file path, use a string or path-like object (:issue:`53767`)
+- Deprecated constructing :class:`.SparseArray` from scalar data, pass a sequence instead (:issue:`53039`)
- Deprecated falling back to filling when ``value`` is not specified in :meth:`DataFrame.replace` and :meth:`Series.replace` with non-dict-like ``to_replace`` (:issue:`33302`)
-- Deprecated literal json input to :func:`read_json`. Wrap literal json string input in ``io.StringIO`` instead. (:issue:`53409`)
-- Deprecated literal string input to :func:`read_xml`. Wrap literal string/bytes input in ``io.StringIO`` / ``io.BytesIO`` instead. (:issue:`53767`)
-- Deprecated literal string/bytes input to :func:`read_html`. Wrap literal string/bytes input in ``io.StringIO`` / ``io.BytesIO`` instead. (:issue:`53767`)
-- Deprecated option "mode.use_inf_as_na", convert inf entries to ``NaN`` before instead (:issue:`51684`)
+- Deprecated literal json input to :func:`read_json`. Wrap literal json string input in ``io.StringIO`` instead (:issue:`53409`)
+- Deprecated literal string input to :func:`read_xml`. Wrap literal string/bytes input in ``io.StringIO`` / ``io.BytesIO`` instead (:issue:`53767`)
+- Deprecated literal string/bytes input to :func:`read_html`. Wrap literal string/bytes input in ``io.StringIO`` / ``io.BytesIO`` instead (:issue:`53767`)
+- Deprecated option ``mode.use_inf_as_na``, convert inf entries to ``NaN`` before instead (:issue:`51684`)
- Deprecated parameter ``obj`` in :meth:`.DataFrameGroupBy.get_group` (:issue:`53545`)
- Deprecated positional indexing on :class:`Series` with :meth:`Series.__getitem__` and :meth:`Series.__setitem__`, in a future version ``ser[item]`` will *always* interpret ``item`` as a label, not a position (:issue:`50617`)
- Deprecated replacing builtin and NumPy functions in ``.agg``, ``.apply``, and ``.transform``; use the corresponding string alias (e.g. ``"sum"`` for ``sum`` or ``np.sum``) instead (:issue:`53425`)
- Deprecated strings ``T``, ``t``, ``L`` and ``l`` denoting units in :func:`to_timedelta` (:issue:`52536`)
-- Deprecated the "method" and "limit" keywords in ``ExtensionArray.fillna``, implement and use ``pad_or_backfill`` instead (:issue:`53621`)
-- Deprecated the "method" and "limit" keywords on :meth:`Series.fillna`, :meth:`DataFrame.fillna`, :meth:`.SeriesGroupBy.fillna`, :meth:`.DataFrameGroupBy.fillna`, and :meth:`.Resampler.fillna`, use ``obj.bfill()`` or ``obj.ffill()`` instead (:issue:`53394`)
+- Deprecated the "method" and "limit" keywords in ``.ExtensionArray.fillna``, implement and use ``pad_or_backfill`` instead (:issue:`53621`)
- Deprecated the ``method`` and ``limit`` keywords in :meth:`DataFrame.replace` and :meth:`Series.replace` (:issue:`33302`)
+- Deprecated the ``method`` and ``limit`` keywords on :meth:`Series.fillna`, :meth:`DataFrame.fillna`, :meth:`.SeriesGroupBy.fillna`, :meth:`.DataFrameGroupBy.fillna`, and :meth:`.Resampler.fillna`, use ``obj.bfill()`` or ``obj.ffill()`` instead (:issue:`53394`)
- Deprecated the behavior of :meth:`Series.__getitem__`, :meth:`Series.__setitem__`, :meth:`DataFrame.__getitem__`, :meth:`DataFrame.__setitem__` with an integer slice on objects with a floating-dtype index, in a future version this will be treated as *positional* indexing (:issue:`49612`)
- Deprecated the use of non-supported datetime64 and timedelta64 resolutions with :func:`pandas.array`. Supported resolutions are: "s", "ms", "us", "ns" resolutions (:issue:`53058`)
-- Deprecated values "pad", "ffill", "bfill", "backfill" for :meth:`Series.interpolate` and :meth:`DataFrame.interpolate`, use ``obj.ffill()`` or ``obj.bfill()`` instead (:issue:`53581`)
-- Deprecated the behavior of :meth:`Index.argmax`, :meth:`Index.argmin`, :meth:`Series.argmax`, :meth:`Series.argmin` with either all-NAs and skipna=True or any-NAs and skipna=False returning -1; in a future version this will raise ``ValueError`` (:issue:`33941`, :issue:`33942`)
-- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_sql` except ``name``. (:issue:`54229`)
+- Deprecated values ``"pad"``, ``"ffill"``, ``"bfill"``, ``"backfill"`` for :meth:`Series.interpolate` and :meth:`DataFrame.interpolate`, use ``obj.ffill()`` or ``obj.bfill()`` instead (:issue:`53581`)
+- Deprecated the behavior of :meth:`Index.argmax`, :meth:`Index.argmin`, :meth:`Series.argmax`, :meth:`Series.argmin` with either all-NAs and ``skipna=True`` or any-NAs and ``skipna=False`` returning -1; in a future version this will raise ``ValueError`` (:issue:`33941`, :issue:`33942`)
+- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_sql` except ``name`` (:issue:`54229`)
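Two of the deprecations listed above come with a recommended migration: ``DataFrame.last`` (build a mask and filter with ``.loc``) and literal JSON input to ``read_json`` (wrap the string in ``io.StringIO``). A minimal sketch of both; the frame, dates, 3-day offset, and JSON payload are illustrative assumptions, not taken from the source:

```python
# Hedged sketch of two deprecation migrations; data and offsets are
# illustrative only.
import io

import pandas as pd

# GH 53692: DataFrame.last is deprecated -- build an explicit mask and
# filter with .loc instead.
idx = pd.date_range("2023-01-01", periods=6, freq="D")
df = pd.DataFrame({"x": range(6)}, index=idx)
mask = df.index > (df.index.max() - pd.Timedelta(days=3))
tail = df.loc[mask]  # rows from the final 3 days, as df.last("3D") gave

# GH 53409: literal JSON string input to read_json is deprecated -- wrap
# the string in io.StringIO instead.
parsed = pd.read_json(io.StringIO('{"a": {"0": 1, "1": 2}}'))
```

Both replacements work on older pandas versions too, so they can be applied ahead of the upgrade.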
.. ---------------------------------------------------------------------------
.. _whatsnew_210.performance:
@@ -596,7 +590,7 @@ Performance improvements
~~~~~~~~~~~~~~~~~~~~~~~~
- Performance improvement in :func:`concat` with homogeneous ``np.float64`` or ``np.float32`` dtypes (:issue:`52685`)
- Performance improvement in :func:`factorize` for object columns not containing strings (:issue:`51921`)
-- Performance improvement in :func:`read_orc` when reading a remote URI file path. (:issue:`51609`)
+- Performance improvement in :func:`read_orc` when reading a remote URI file path (:issue:`51609`)
- Performance improvement in :func:`read_parquet` and :meth:`DataFrame.to_parquet` when reading a remote file with ``engine="pyarrow"`` (:issue:`51609`)
- Performance improvement in :func:`read_parquet` on string columns when using ``use_nullable_dtypes=True`` (:issue:`47345`)
- Performance improvement in :meth:`DataFrame.clip` and :meth:`Series.clip` (:issue:`51472`)
@@ -611,9 +605,9 @@ Performance improvements
- Performance improvement when parsing strings to ``boolean[pyarrow]`` dtype (:issue:`51730`)
- Performance improvement when searching an :class:`Index` sliced from other indexes (:issue:`51738`)
- Performance improvement in :func:`concat` (:issue:`52291`, :issue:`52290`)
-- :class:`Period`'s default formatter (`period_format`) is now significantly (~twice) faster. This improves performance of ``str(Period)``, ``repr(Period)``, and :meth:`Period.strftime(fmt=None)`, as well as ``PeriodArray.strftime(fmt=None)``, ``PeriodIndex.strftime(fmt=None)`` and ``PeriodIndex.format(fmt=None)``. Finally, ``to_csv`` operations involving :class:`PeriodArray` or :class:`PeriodIndex` with default ``date_format`` are also significantly accelerated. (:issue:`51459`)
+- :class:`Period`'s default formatter (``period_format``) is now significantly (~twice) faster. This improves performance of ``str(Period)``, ``repr(Period)``, and ``Period.strftime(fmt=None)``, as well as ``PeriodArray.strftime(fmt=None)``, ``PeriodIndex.strftime(fmt=None)`` and ``PeriodIndex.format(fmt=None)``. ``to_csv`` operations involving :class:`arrays.PeriodArray` or :class:`PeriodIndex` with default ``date_format`` are also significantly accelerated (:issue:`51459`)
- Performance improvement accessing :attr:`arrays.IntegerArrays.dtype` & :attr:`arrays.FloatingArray.dtype` (:issue:`52998`)
-- Performance improvement for :class:`DataFrameGroupBy`/:class:`SeriesGroupBy` aggregations (e.g. :meth:`DataFrameGroupBy.sum`) with ``engine="numba"`` (:issue:`53731`)
+- Performance improvement for :class:`.DataFrameGroupBy`/:class:`.SeriesGroupBy` aggregations (e.g. :meth:`.DataFrameGroupBy.sum`) with ``engine="numba"`` (:issue:`53731`)
- Performance improvement in :class:`DataFrame` reductions with ``axis=1`` and extension dtypes (:issue:`54341`)
- Performance improvement in :class:`DataFrame` reductions with ``axis=None`` and extension dtypes (:issue:`54308`)
- Performance improvement in :class:`MultiIndex` and multi-column operations (e.g. :meth:`DataFrame.sort_values`, :meth:`DataFrame.groupby`, :meth:`Series.unstack`) when index/column values are already sorted (:issue:`53806`)
@@ -622,24 +616,24 @@ Performance improvements
- Performance improvement in :func:`concat` when the concatenation axis is a :class:`MultiIndex` (:issue:`53574`)
- Performance improvement in :func:`merge` for PyArrow backed strings (:issue:`54443`)
- Performance improvement in :func:`read_csv` with ``engine="c"`` (:issue:`52632`)
+- Performance improvement in :meth:`.ArrowExtensionArray.to_numpy` (:issue:`52525`)
- Performance improvement in :meth:`.DataFrameGroupBy.groups` (:issue:`53088`)
- Performance improvement in :meth:`DataFrame.astype` when ``dtype`` is an extension dtype (:issue:`54299`)
- Performance improvement in :meth:`DataFrame.isin` for extension dtypes (:issue:`53514`)
- Performance improvement in :meth:`DataFrame.loc` when selecting rows and columns (:issue:`53014`)
+- Performance improvement in :meth:`DataFrame.transpose` when transposing a DataFrame with a single PyArrow dtype (:issue:`54224`)
- Performance improvement in :meth:`DataFrame.transpose` when transposing a DataFrame with a single masked dtype, e.g. :class:`Int64` (:issue:`52836`)
-- Performance improvement in :meth:`DataFrame.transpose` when transposing a DataFrame with a single pyarrow dtype (:issue:`54224`)
-- Performance improvement in :meth:`Series.add` for pyarrow string and binary dtypes (:issue:`53150`)
+- Performance improvement in :meth:`Series.add` for PyArrow string and binary dtypes (:issue:`53150`)
- Performance improvement in :meth:`Series.corr` and :meth:`Series.cov` for extension dtypes (:issue:`52502`)
-- Performance improvement in :meth:`Series.ffill`, :meth:`Series.bfill`, :meth:`DataFrame.ffill`, :meth:`DataFrame.bfill` with pyarrow dtypes (:issue:`53950`)
-- Performance improvement in :meth:`Series.str.get_dummies` for pyarrow-backed strings (:issue:`53655`)
-- Performance improvement in :meth:`Series.str.get` for pyarrow-backed strings (:issue:`53152`)
-- Performance improvement in :meth:`Series.str.split` with ``expand=True`` for pyarrow-backed strings (:issue:`53585`)
-- Performance improvement in :meth:`Series.to_numpy` when dtype is a numpy float dtype and ``na_value`` is ``np.nan`` (:issue:`52430`)
-- Performance improvement in :meth:`~arrays.ArrowExtensionArray.astype` when converting from a pyarrow timestamp or duration dtype to numpy (:issue:`53326`)
-- Performance improvement in :meth:`~arrays.ArrowExtensionArray.to_numpy` (:issue:`52525`)
+- Performance improvement in :meth:`Series.ffill`, :meth:`Series.bfill`, :meth:`DataFrame.ffill`, :meth:`DataFrame.bfill` with PyArrow dtypes (:issue:`53950`)
+- Performance improvement in :meth:`Series.str.get_dummies` for PyArrow-backed strings (:issue:`53655`)
+- Performance improvement in :meth:`Series.str.get` for PyArrow-backed strings (:issue:`53152`)
+- Performance improvement in :meth:`Series.str.split` with ``expand=True`` for PyArrow-backed strings (:issue:`53585`)
+- Performance improvement in :meth:`Series.to_numpy` when dtype is a NumPy float dtype and ``na_value`` is ``np.nan`` (:issue:`52430`)
+- Performance improvement in :meth:`~arrays.ArrowExtensionArray.astype` when converting from a PyArrow timestamp or duration dtype to NumPy (:issue:`53326`)
- Performance improvement in various :class:`MultiIndex` set and indexing operations (:issue:`53955`)
-- Performance improvement when doing various reshaping operations on :class:`arrays.IntegerArrays` & :class:`arrays.FloatingArray` by avoiding doing unnecessary validation (:issue:`53013`)
-- Performance improvement when indexing with pyarrow timestamp and duration dtypes (:issue:`53368`)
+- Performance improvement when doing various reshaping operations on :class:`arrays.IntegerArray` & :class:`arrays.FloatingArray` by avoiding doing unnecessary validation (:issue:`53013`)
+- Performance improvement when indexing with PyArrow timestamp and duration dtypes (:issue:`53368`)
- Performance improvement when passing an array to :meth:`RangeIndex.take`, :meth:`DataFrame.loc`, or :meth:`DataFrame.iloc` and the DataFrame is using a RangeIndex (:issue:`53387`)
.. ---------------------------------------------------------------------------
@@ -656,15 +650,15 @@ Categorical
Datetimelike
^^^^^^^^^^^^
-- :meth:`DatetimeIndex.map` with ``na_action="ignore"`` now works as expected. (:issue:`51644`)
-- :meth:`DatetimeIndex.slice_indexer` now raises ``KeyError`` for non-monotonic indexes if either of the slice bounds is not in the index, this behaviour was previously deprecated but inconsistently handled. (:issue:`53983`)
+- :meth:`DatetimeIndex.map` with ``na_action="ignore"`` now works as expected (:issue:`51644`)
+- :meth:`DatetimeIndex.slice_indexer` now raises ``KeyError`` for non-monotonic indexes if either of the slice bounds is not in the index; this behaviour was previously deprecated but inconsistently handled (:issue:`53983`)
- Bug in :class:`DateOffset` which had inconsistent behavior when multiplying a :class:`DateOffset` object by a constant (:issue:`47953`)
- Bug in :func:`date_range` when ``freq`` was a :class:`DateOffset` with ``nanoseconds`` (:issue:`46877`)
-- Bug in :func:`to_datetime` converting :class:`Series` or :class:`DataFrame` containing :class:`arrays.ArrowExtensionArray` of ``pyarrow`` timestamps to numpy datetimes (:issue:`52545`)
-- Bug in :meth:`DataFrame.to_sql` raising ``ValueError`` for pyarrow-backed date like dtypes (:issue:`53854`)
+- Bug in :func:`to_datetime` converting :class:`Series` or :class:`DataFrame` containing :class:`arrays.ArrowExtensionArray` of PyArrow timestamps to NumPy datetimes (:issue:`52545`)
+- Bug in :meth:`.DatetimeArray.map` and :meth:`DatetimeIndex.map`, where the supplied callable operated array-wise instead of element-wise (:issue:`51977`)
+- Bug in :meth:`DataFrame.to_sql` raising ``ValueError`` for PyArrow-backed date-like dtypes (:issue:`53854`)
- Bug in :meth:`Timestamp.date`, :meth:`Timestamp.isocalendar`, :meth:`Timestamp.timetuple`, and :meth:`Timestamp.toordinal` were returning incorrect results for inputs outside those supported by the Python standard library's datetime module (:issue:`53668`)
- Bug in :meth:`Timestamp.round` with values close to the implementation bounds returning incorrect results instead of raising ``OutOfBoundsDatetime`` (:issue:`51494`)
-- Bug in :meth:`arrays.DatetimeArray.map` and :meth:`DatetimeIndex.map`, where the supplied callable operated array-wise instead of element-wise (:issue:`51977`)
- Bug in constructing a :class:`Series` or :class:`DataFrame` from a datetime or timedelta scalar always inferring nanosecond resolution instead of inferring from the input (:issue:`52212`)
- Bug in constructing a :class:`Timestamp` from a string representing a time without a date inferring an incorrect unit (:issue:`54097`)
- Bug in constructing a :class:`Timestamp` with ``ts_input=pd.NA`` raising ``TypeError`` (:issue:`45481`)
@@ -672,12 +666,12 @@ Datetimelike
Timedelta
^^^^^^^^^
-- :meth:`TimedeltaIndex.map` with ``na_action="ignore"`` now works as expected (:issue:`51644`)
- Bug in :class:`TimedeltaIndex` division or multiplication leading to ``.freq`` of "0 Days" instead of ``None`` (:issue:`51575`)
-- Bug in :class:`Timedelta` with Numpy timedelta64 objects not properly raising ``ValueError`` (:issue:`52806`)
-- Bug in :func:`to_timedelta` converting :class:`Series` or :class:`DataFrame` containing :class:`ArrowDtype` of ``pyarrow.duration`` to numpy ``timedelta64`` (:issue:`54298`)
+- Bug in :class:`Timedelta` with NumPy ``timedelta64`` objects not properly raising ``ValueError`` (:issue:`52806`)
+- Bug in :func:`to_timedelta` converting :class:`Series` or :class:`DataFrame` containing :class:`ArrowDtype` of ``pyarrow.duration`` to NumPy ``timedelta64`` (:issue:`54298`)
- Bug in :meth:`Timedelta.__hash__`, raising an ``OutOfBoundsTimedelta`` on certain large values of second resolution (:issue:`54037`)
- Bug in :meth:`Timedelta.round` with values close to the implementation bounds returning incorrect results instead of raising ``OutOfBoundsTimedelta`` (:issue:`51494`)
+- Bug in :meth:`TimedeltaIndex.map` with ``na_action="ignore"`` (:issue:`51644`)
- Bug in :meth:`arrays.TimedeltaArray.map` and :meth:`TimedeltaIndex.map`, where the supplied callable operated array-wise instead of element-wise (:issue:`51977`)
Timezones
@@ -689,10 +683,10 @@ Numeric
^^^^^^^
- Bug in :class:`RangeIndex` setting ``step`` incorrectly when being the subtrahend with minuend a numeric value (:issue:`53255`)
- Bug in :meth:`Series.corr` and :meth:`Series.cov` raising ``AttributeError`` for masked dtypes (:issue:`51422`)
-- Bug when calling :meth:`Series.kurt` and :meth:`Series.skew` on numpy data of all zero returning a python type instead of a numpy type (:issue:`53482`)
+- Bug when calling :meth:`Series.kurt` and :meth:`Series.skew` on NumPy data of all zero returning a Python type instead of a NumPy type (:issue:`53482`)
- Bug in :meth:`Series.mean`, :meth:`DataFrame.mean` with object-dtype values containing strings that can be converted to numbers (e.g. "2") returning incorrect numeric results; these now raise ``TypeError`` (:issue:`36703`, :issue:`44008`)
-- Bug in :meth:`DataFrame.corrwith` raising ``NotImplementedError`` for pyarrow-backed dtypes (:issue:`52314`)
-- Bug in :meth:`DataFrame.size` and :meth:`Series.size` returning 64-bit integer instead of int (:issue:`52897`)
+- Bug in :meth:`DataFrame.corrwith` raising ``NotImplementedError`` for PyArrow-backed dtypes (:issue:`52314`)
+- Bug in :meth:`DataFrame.size` and :meth:`Series.size` returning a 64-bit integer instead of a Python int (:issue:`52897`)
- Bug in :meth:`DateFrame.dot` returning ``object`` dtype for :class:`ArrowDtype` data (:issue:`53979`)
- Bug in :meth:`Series.any`, :meth:`Series.all`, :meth:`DataFrame.any`, and :meth:`DataFrame.all` had the default value of ``bool_only`` set to ``None`` instead of ``False``; this change should have no impact on users (:issue:`53258`)
- Bug in :meth:`Series.corr` and :meth:`Series.cov` raising ``AttributeError`` for masked dtypes (:issue:`51422`)
@@ -703,9 +697,9 @@ Numeric
Conversion
^^^^^^^^^^
- Bug in :func:`DataFrame.style.to_latex` and :func:`DataFrame.style.to_html` if the DataFrame contains integers with more digits than can be represented by floating point double precision (:issue:`52272`)
-- Bug in :func:`array` when given a ``datetime64`` or ``timedelta64`` dtype with unit of "s", "us", or "ms" returning :class:`NumpyExtensionArray` instead of :class:`DatetimeArray` or :class:`TimedeltaArray` (:issue:`52859`)
-- Bug in :func:`array` when given an empty list and no dtype returning :class:`NumpyExtensionArray` instead of :class:`FloatingArray` (:issue:`54371`)
-- Bug in :meth:`ArrowDtype.numpy_dtype` returning nanosecond units for non-nanosecond ``pyarrow.timestamp`` and ``pyarrow.duration`` types (:issue:`51800`)
+- Bug in :func:`array` when given a ``datetime64`` or ``timedelta64`` dtype with unit of "s", "us", or "ms" returning :class:`.NumpyExtensionArray` instead of :class:`.DatetimeArray` or :class:`.TimedeltaArray` (:issue:`52859`)
+- Bug in :func:`array` when given an empty list and no dtype returning :class:`.NumpyExtensionArray` instead of :class:`.FloatingArray` (:issue:`54371`)
+- Bug in :meth:`.ArrowDtype.numpy_dtype` returning nanosecond units for non-nanosecond ``pyarrow.timestamp`` and ``pyarrow.duration`` types (:issue:`51800`)
- Bug in :meth:`DataFrame.__repr__` incorrectly raising a ``TypeError`` when the dtype of a column is ``np.record`` (:issue:`48526`)
- Bug in :meth:`DataFrame.info` raising ``ValueError`` when ``use_numba`` is set (:issue:`51922`)
- Bug in :meth:`DataFrame.insert` raising ``TypeError`` if ``loc`` is ``np.int64`` (:issue:`53193`)
@@ -730,10 +724,10 @@ Indexing
Missing
^^^^^^^
-- Bug in :meth:`DataFrame.interpolate` failing to fill across multiblock data when ``method`` is "pad", "ffill", "bfill", or "backfill" (:issue:`53898`)
+- Bug in :meth:`DataFrame.interpolate` failing to fill across multiblock data when ``method`` is ``"pad"``, ``"ffill"``, ``"bfill"``, or ``"backfill"`` (:issue:`53898`)
- Bug in :meth:`DataFrame.interpolate` ignoring ``inplace`` when :class:`DataFrame` is empty (:issue:`53199`)
- Bug in :meth:`Series.idxmin`, :meth:`Series.idxmax`, :meth:`DataFrame.idxmin`, :meth:`DataFrame.idxmax` with a :class:`DatetimeIndex` index containing ``NaT`` incorrectly returning ``NaN`` instead of ``NaT`` (:issue:`43587`)
-- Bug in :meth:`Series.interpolate` and :meth:`DataFrame.interpolate` failing to raise on invalid ``downcast`` keyword, which can be only ``None`` or "infer" (:issue:`53103`)
+- Bug in :meth:`Series.interpolate` and :meth:`DataFrame.interpolate` failing to raise on invalid ``downcast`` keyword, which can be only ``None`` or ``"infer"`` (:issue:`53103`)
- Bug in :meth:`Series.interpolate` and :meth:`DataFrame.interpolate` with complex dtype incorrectly failing to fill ``NaN`` entries (:issue:`53635`)
MultiIndex
@@ -745,25 +739,23 @@ I/O
^^^
- :meth:`DataFrame.to_orc` now raising ``ValueError`` when non-default :class:`Index` is given (:issue:`51828`)
- :meth:`DataFrame.to_sql` now raising ``ValueError`` when the name param is left empty while using SQLAlchemy to connect (:issue:`52675`)
-- Added ``filters`` parameter to :func:`read_parquet` to filter out data, compatible with both ``engines`` (:issue:`53212`)
-- Bug in :func:`json_normalize`, fix json_normalize cannot parse metadata fields list type (:issue:`37782`)
+- Bug in :func:`json_normalize` where it could not parse metadata fields of list type (:issue:`37782`)
- Bug in :func:`read_csv` where it would error when ``parse_dates`` was set to a list or dictionary with ``engine="pyarrow"`` (:issue:`47961`)
-- Bug in :func:`read_csv`, with ``engine="pyarrow"`` erroring when specifying a ``dtype`` with ``index_col`` (:issue:`53229`)
-- Bug in :func:`read_hdf` not properly closing store after a ``IndexError`` is raised (:issue:`52781`)
-- Bug in :func:`read_html`, style elements were read into DataFrames (:issue:`52197`)
-- Bug in :func:`read_html`, tail texts were removed together with elements containing ``display:none`` style (:issue:`51629`)
+- Bug in :func:`read_csv` with ``engine="pyarrow"`` raising when specifying a ``dtype`` with ``index_col`` (:issue:`53229`)
+- Bug in :func:`read_hdf` not properly closing store after an ``IndexError`` is raised (:issue:`52781`)
+- Bug in :func:`read_html` where style elements were read into DataFrames (:issue:`52197`)
+- Bug in :func:`read_html` where tail texts were removed together with elements containing ``display:none`` style (:issue:`51629`)
- Bug in :func:`read_sql_table` raising an exception when reading a view (:issue:`52969`)
- Bug in :func:`read_sql` when reading multiple timezone aware columns with the same column name (:issue:`44421`)
- Bug in :func:`read_xml` stripping whitespace in string data (:issue:`53811`)
- Bug in :meth:`DataFrame.to_html` where ``colspace`` was incorrectly applied in case of multi index columns (:issue:`53885`)
- Bug in :meth:`DataFrame.to_html` where conversion for an empty :class:`DataFrame` with complex dtype raised a ``ValueError`` (:issue:`54167`)
-- Bug in :meth:`DataFrame.to_json` where :class:`DateTimeArray`/:class:`DateTimeIndex` with non nanosecond precision could not be serialized correctly (:issue:`53686`)
+- Bug in :meth:`DataFrame.to_json` where :class:`.DatetimeArray`/:class:`.DatetimeIndex` with non-nanosecond precision could not be serialized correctly (:issue:`53686`)
- Bug when writing and reading empty Stata dta files where dtype information was lost (:issue:`46240`)
- Bug where ``bz2`` was treated as a hard requirement (:issue:`53857`)
Period
^^^^^^
-- :meth:`PeriodIndex.map` with ``na_action="ignore"`` now works as expected (:issue:`51644`)
- Bug in :class:`PeriodDtype` constructor failing to raise ``TypeError`` when no argument is passed or when ``None`` is passed (:issue:`27388`)
- Bug in :class:`PeriodDtype` constructor incorrectly returning the same ``normalize`` for different :class:`DateOffset` ``freq`` inputs (:issue:`24121`)
- Bug in :class:`PeriodDtype` constructor raising ``ValueError`` instead of ``TypeError`` when an invalid type is passed (:issue:`51790`)
@@ -771,6 +763,7 @@ Period
- Bug in :func:`read_csv` not processing empty strings as a null value, with ``engine="pyarrow"`` (:issue:`52087`)
- Bug in :func:`read_csv` returning ``object`` dtype columns instead of ``float64`` dtype columns with ``engine="pyarrow"`` for columns that are all null with ``engine="pyarrow"`` (:issue:`52087`)
- Bug in :meth:`Period.now` not accepting the ``freq`` parameter as a keyword argument (:issue:`53369`)
+- Bug in :meth:`PeriodIndex.map` with ``na_action="ignore"`` (:issue:`51644`)
- Bug in :meth:`arrays.PeriodArray.map` and :meth:`PeriodIndex.map`, where the supplied callable operated array-wise instead of element-wise (:issue:`51977`)
- Bug in incorrectly allowing construction of :class:`Period` or :class:`PeriodDtype` with :class:`CustomBusinessDay` freq; use :class:`BusinessDay` instead (:issue:`52534`)
@@ -781,29 +774,29 @@ Plotting
Groupby/resample/rolling
^^^^^^^^^^^^^^^^^^^^^^^^
-- Bug in :meth:`.DataFrameGroupBy.idxmin`, :meth:`.SeriesGroupBy.idxmin`, :meth:`.DataFrameGroupBy.idxmax`, :meth:`.SeriesGroupBy.idxmax` return wrong dtype when used on empty DataFrameGroupBy or SeriesGroupBy (:issue:`51423`)
-- Bug in :meth:`DataFrame.resample` and :meth:`Series.resample` :class:`Datetimelike` ``origin`` has no effect in resample when values are outside of axis (:issue:`53662`)
+- Bug in :meth:`.DataFrameGroupBy.idxmin`, :meth:`.SeriesGroupBy.idxmin`, :meth:`.DataFrameGroupBy.idxmax`, :meth:`.SeriesGroupBy.idxmax` returning wrong dtype when used on an empty DataFrameGroupBy or SeriesGroupBy (:issue:`51423`)
- Bug in :meth:`DataFrame.resample` and :meth:`Series.resample` in incorrectly allowing non-fixed ``freq`` when resampling on a :class:`TimedeltaIndex` (:issue:`51896`)
- Bug in :meth:`DataFrame.resample` and :meth:`Series.resample` losing time zone when resampling empty data (:issue:`53664`)
+- Bug in :meth:`DataFrame.resample` and :meth:`Series.resample` where ``origin`` has no effect in resample when values are outside of axis (:issue:`53662`)
- Bug in weighted rolling aggregations when specifying ``min_periods=0`` (:issue:`51449`)
-- Bug in :meth:`DataFrame.groupby` and :meth:`Series.groupby`, where, when the index of the
+- Bug in :meth:`DataFrame.groupby` and :meth:`Series.groupby` where, when the index of the
grouped :class:`Series` or :class:`DataFrame` was a :class:`DatetimeIndex`, :class:`TimedeltaIndex`
or :class:`PeriodIndex`, and the ``groupby`` method was given a function as its first argument,
- the function operated on the whole index rather than each element of the index. (:issue:`51979`)
+ the function operated on the whole index rather than each element of the index (:issue:`51979`)
- Bug in :meth:`.DataFrameGroupBy.agg` with lists not respecting ``as_index=False`` (:issue:`52849`)
-- Bug in :meth:`.DataFrameGroupBy.apply` causing an error to be raised when the input :class:`DataFrame` was subset as a :class:`DataFrame` after groupby (``[['a']]`` and not ``['a']``) and the given callable returned :class:`Series` that were not all indexed the same. (:issue:`52444`)
+- Bug in :meth:`.DataFrameGroupBy.apply` causing an error to be raised when the input :class:`DataFrame` was subset as a :class:`DataFrame` after groupby (``[['a']]`` and not ``['a']``) and the given callable returned :class:`Series` that were not all indexed the same (:issue:`52444`)
- Bug in :meth:`.DataFrameGroupBy.apply` raising a ``TypeError`` when selecting multiple columns and providing a function that returns ``np.ndarray`` results (:issue:`18930`)
-- Bug in :meth:`.GroupBy.groups` with a datetime key in conjunction with another key produced incorrect number of group keys (:issue:`51158`)
-- Bug in :meth:`.GroupBy.quantile` may implicitly sort the result index with ``sort=False`` (:issue:`53009`)
+- Bug in :meth:`.DataFrameGroupBy.groups` and :meth:`.SeriesGroupBy.groups` with a datetime key in conjunction with another key produced an incorrect number of group keys (:issue:`51158`)
+- Bug in :meth:`.DataFrameGroupBy.quantile` and :meth:`.SeriesGroupBy.quantile` may implicitly sort the result index with ``sort=False`` (:issue:`53009`)
- Bug in :meth:`.SeriesGroupBy.size` where the dtype would be ``np.int64`` for data with :class:`ArrowDtype` or masked dtypes (e.g. ``Int64``) (:issue:`53831`)
-- Bug in :meth:`DataFrame.groupby` with column selection on the resulting groupby object not returning names as tuples when grouping by a list of a single element. (:issue:`53500`)
-- Bug in :meth:`.GroupBy.var` failing to raise ``TypeError`` when called with datetime64, timedelta64 or :class:`PeriodDtype` values (:issue:`52128`, :issue:`53045`)
-- Bug in :meth:`.DataFrameGroupby.resample` with ``kind="period"`` raising ``AttributeError`` (:issue:`24103`)
+- Bug in :meth:`DataFrame.groupby` with column selection on the resulting groupby object not returning names as tuples when grouping by a list consisting of a single element (:issue:`53500`)
+- Bug in :meth:`.DataFrameGroupBy.var` and :meth:`.SeriesGroupBy.var` failing to raise ``TypeError`` when called with datetime64, timedelta64 or :class:`PeriodDtype` values (:issue:`52128`, :issue:`53045`)
+- Bug in :meth:`.DataFrameGroupBy.resample` with ``kind="period"`` raising ``AttributeError`` (:issue:`24103`)
- Bug in :meth:`.Resampler.ohlc` with empty object returning a :class:`Series` instead of empty :class:`DataFrame` (:issue:`42902`)
- Bug in :meth:`.SeriesGroupBy.count` and :meth:`.DataFrameGroupBy.count` where the dtype would be ``np.int64`` for data with :class:`ArrowDtype` or masked dtypes (e.g. ``Int64``) (:issue:`53831`)
- Bug in :meth:`.SeriesGroupBy.nth` and :meth:`.DataFrameGroupBy.nth` after performing column selection when using ``dropna="any"`` or ``dropna="all"`` would not subset columns (:issue:`53518`)
- Bug in :meth:`.SeriesGroupBy.nth` and :meth:`.DataFrameGroupBy.nth` raised after performing column selection when using ``dropna="any"`` or ``dropna="all"`` resulted in rows being dropped (:issue:`53518`)
-- Bug in :meth:`.SeriesGroupBy.sum` and :meth:`.DataFrameGroupby.sum` summing ``np.inf + np.inf`` and ``(-np.inf) + (-np.inf)`` to ``np.nan`` (:issue:`53606`)
+- Bug in :meth:`.SeriesGroupBy.sum` and :meth:`.DataFrameGroupBy.sum` summing ``np.inf + np.inf`` and ``(-np.inf) + (-np.inf)`` to ``np.nan`` instead of ``np.inf`` and ``-np.inf`` respectively (:issue:`53606`)
- Bug in :meth:`Series.groupby` raising an error when grouped :class:`Series` has a :class:`DatetimeIndex` index and a :class:`Series` with a name that is a month is given to the ``by`` argument (:issue:`48509`)
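One fix above (:issue:`51979`) concerns grouping by a callable when the index is a :class:`DatetimeIndex`: the callable now receives each index element rather than the whole index. A minimal sketch, with an illustrative series and grouping key:

```python
# Sketch of the GH 51979 behavior described above: the grouping callable
# receives one Timestamp per call, not the whole DatetimeIndex. The
# series values and day-parity key are illustrative.
import pandas as pd

idx = pd.date_range("2023-01-01", periods=4, freq="D")
ser = pd.Series([1, 2, 3, 4], index=idx)

# `ts` is a single Timestamp here, so scalar attributes like .day work.
result = ser.groupby(lambda ts: ts.day % 2).sum()
```

Days 1-4 split into even/odd day-of-month groups, so the sums are 2 + 4 for key 0 and 1 + 3 for key 1.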
Reshaping
@@ -826,23 +819,23 @@ Reshaping
Sparse
^^^^^^
-- Bug in :class:`SparseDtype` constructor failing to raise ``TypeError`` when given an incompatible ``dtype`` for its subtype, which must be a ``numpy`` dtype (:issue:`53160`)
+- Bug in :class:`SparseDtype` constructor failing to raise ``TypeError`` when given an incompatible ``dtype`` for its subtype, which must be a NumPy dtype (:issue:`53160`)
- Bug in :meth:`arrays.SparseArray.map` allowed the fill value to be included in the sparse values (:issue:`52095`)
ExtensionArray
^^^^^^^^^^^^^^
-- Bug in :class:`ArrowStringArray` constructor raises ``ValueError`` with dictionary types of strings (:issue:`54074`)
+- Bug in :class:`.ArrowStringArray` constructor raising ``ValueError`` with dictionary types of strings (:issue:`54074`)
- Bug in :class:`DataFrame` constructor not copying :class:`Series` with extension dtype when given in dict (:issue:`53744`)
- Bug in :class:`~arrays.ArrowExtensionArray` converting pandas non-nanosecond temporal objects from non-zero values to zero values (:issue:`53171`)
-- Bug in :meth:`Series.quantile` for pyarrow temporal types raising ArrowInvalid (:issue:`52678`)
+- Bug in :meth:`Series.quantile` for PyArrow temporal types raising ``ArrowInvalid`` (:issue:`52678`)
- Bug in :meth:`Series.rank` returning wrong order for small values with ``Float64`` dtype (:issue:`52471`)
- Bug in :meth:`~arrays.ArrowExtensionArray.__iter__` and :meth:`~arrays.ArrowExtensionArray.__getitem__` returning python datetime and timedelta objects for non-nano dtypes (:issue:`53326`)
-- Bug where the :class:`DataFrame` repr would not work when a column would have an :class:`ArrowDtype` with an ``pyarrow.ExtensionDtype`` (:issue:`54063`)
-- Bug where the ``__from_arrow__`` method of masked ExtensionDtypes(e.g. :class:`Float64Dtype`, :class:`BooleanDtype`) would not accept pyarrow arrays of type ``pyarrow.null()`` (:issue:`52223`)
+- Bug where the :class:`DataFrame` repr would not work when a column had an :class:`ArrowDtype` with a ``pyarrow.ExtensionDtype`` (:issue:`54063`)
+- Bug where the ``__from_arrow__`` method of masked ExtensionDtypes (e.g. :class:`Float64Dtype`, :class:`BooleanDtype`) would not accept PyArrow arrays of type ``pyarrow.null()`` (:issue:`52223`)
Styler
^^^^^^
-- Bug in :meth:`Styler._copy` calling overridden methods in subclasses of :class:`Styler` (:issue:`52728`)
+- Bug in :meth:`.Styler._copy` calling overridden methods in subclasses of :class:`.Styler` (:issue:`52728`)
Metadata
^^^^^^^^
@@ -852,21 +845,21 @@ Metadata
Other
^^^^^
+- Bug in :class:`.FloatingArray.__contains__` with ``NaN`` item incorrectly returning ``False`` when ``NaN`` values are present (:issue:`52840`)
- Bug in :class:`DataFrame` and :class:`Series` raising for data of complex dtype when ``NaN`` values are present (:issue:`53627`)
- Bug in :class:`DatetimeIndex` where ``repr`` of index passed with time does not print time is midnight and non-day based freq(:issue:`53470`)
-- Bug in :class:`FloatingArray.__contains__` with ``NaN`` item incorrectly returning ``False`` when ``NaN`` values are present (:issue:`52840`)
-- Bug in :func:`.testing.assert_almost_equal` now throwing assertion error for two unequal sets (:issue:`51727`)
+- Bug in :func:`.testing.assert_frame_equal` and :func:`.testing.assert_series_equal` now throw assertion error for two unequal sets (:issue:`51727`)
- Bug in :func:`.testing.assert_frame_equal` checks category dtypes even when asked not to check index type (:issue:`52126`)
- Bug in :func:`api.interchange.from_dataframe` was not respecting ``allow_copy`` argument (:issue:`54322`)
- Bug in :func:`api.interchange.from_dataframe` was raising during interchanging from non-pandas tz-aware data containing null values (:issue:`54287`)
- Bug in :func:`api.interchange.from_dataframe` when converting an empty DataFrame object (:issue:`53155`)
- Bug in :func:`from_dummies` where the resulting :class:`Index` did not match the original :class:`Index` (:issue:`54300`)
- Bug in :func:`from_dummies` where the resulting data would always be ``object`` dtype instead of the dtype of the columns (:issue:`54300`)
+- Bug in :meth:`.DataFrameGroupBy.first`, :meth:`.DataFrameGroupBy.last`, :meth:`.SeriesGroupBy.first`, and :meth:`.SeriesGroupBy.last` where an empty group would return ``np.nan`` instead of the corresponding :class:`.ExtensionArray` NA value (:issue:`39098`)
- Bug in :meth:`DataFrame.pivot_table` with casting the mean of ints back to an int (:issue:`16676`)
- Bug in :meth:`DataFrame.reindex` with a ``fill_value`` that should be inferred with a :class:`ExtensionDtype` incorrectly inferring ``object`` dtype (:issue:`52586`)
-- Bug in :meth:`DataFrame.shift` and :meth:`Series.shift` and :meth:`DataFrameGroupBy.shift` when passing both "freq" and "fill_value" silently ignoring "fill_value" instead of raising ``ValueError`` (:issue:`53832`)
+- Bug in :meth:`DataFrame.shift` and :meth:`Series.shift` and :meth:`.DataFrameGroupBy.shift` when passing both ``freq`` and ``fill_value`` silently ignoring ``fill_value`` instead of raising ``ValueError`` (:issue:`53832`)
- Bug in :meth:`DataFrame.shift` with ``axis=1`` on a :class:`DataFrame` with a single :class:`ExtensionDtype` column giving incorrect results (:issue:`53832`)
-- Bug in :meth:`GroupBy.first` and :meth:`GroupBy.last` where an empty group would return ``np.nan`` instead of a an ExtensionArray's NA value (:issue:`39098`)
- Bug in :meth:`Index.sort_values` when a ``key`` is passed (:issue:`52764`)
- Bug in :meth:`Series.align`, :meth:`DataFrame.align`, :meth:`Series.reindex`, :meth:`DataFrame.reindex`, :meth:`Series.interpolate`, :meth:`DataFrame.interpolate`, incorrectly failing to raise with method="asfreq" (:issue:`53620`)
- Bug in :meth:`Series.argsort` failing to raise when an invalid ``axis`` is passed (:issue:`54257`)
| Backport PR #54545: DOC: whatsnew 2.1.0 refinements | https://api.github.com/repos/pandas-dev/pandas/pulls/54588 | 2023-08-16T21:57:58Z | 2023-08-16T23:24:53Z | 2023-08-16T23:24:52Z | 2023-08-16T23:24:53Z |
CI: Enable MacOS Python Dev tests | diff --git a/.github/workflows/unit-tests.yml b/.github/workflows/unit-tests.yml
index 66d8320206429..030c9546fecca 100644
--- a/.github/workflows/unit-tests.yml
+++ b/.github/workflows/unit-tests.yml
@@ -77,16 +77,12 @@ jobs:
env_file: actions-311-numpydev.yaml
pattern: "not slow and not network and not single_cpu"
test_args: "-W error::DeprecationWarning -W error::FutureWarning"
- # TODO(cython3): Re-enable once next-beta(after beta 1) comes out
- # There are some warnings failing the build with -werror
- pandas_ci: "0"
- name: "Pyarrow Nightly"
env_file: actions-311-pyarrownightly.yaml
pattern: "not slow and not network and not single_cpu"
fail-fast: false
name: ${{ matrix.name || format('ubuntu-latest {0}', matrix.env_file) }}
env:
- ENV_FILE: ci/deps/${{ matrix.env_file }}
PATTERN: ${{ matrix.pattern }}
EXTRA_APT: ${{ matrix.extra_apt || '' }}
LANG: ${{ matrix.lang || 'C.UTF-8' }}
@@ -150,14 +146,13 @@ jobs:
- name: Generate extra locales
# These extra locales will be available for locale.setlocale() calls in tests
- run: |
- sudo locale-gen ${{ matrix.extra_loc }}
+ run: sudo locale-gen ${{ matrix.extra_loc }}
if: ${{ matrix.extra_loc }}
- name: Set up Conda
uses: ./.github/actions/setup-conda
with:
- environment-file: ${{ env.ENV_FILE }}
+ environment-file: ci/deps/${{ matrix.env_file }}
- name: Build Pandas
id: build
@@ -312,15 +307,14 @@ jobs:
# to the corresponding posix/windows-macos/sdist etc. workflows.
# Feel free to modify this comment as necessary.
#if: false # Uncomment this to freeze the workflow, comment it to unfreeze
+ defaults:
+ run:
+ shell: bash -eou pipefail {0}
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
- # TODO: Disable macOS for now, Github Actions bug where python is not
- # symlinked correctly to 3.12
- # xref https://github.com/actions/setup-python/issues/701
- #os: [ubuntu-22.04, macOS-latest, windows-latest]
- os: [ubuntu-22.04, windows-latest]
+ os: [ubuntu-22.04, macOS-latest, windows-latest]
timeout-minutes: 180
@@ -345,22 +339,15 @@ jobs:
with:
python-version: '3.12-dev'
- - name: Install dependencies
+ - name: Build Environment
run: |
python --version
python -m pip install --upgrade pip setuptools wheel meson[ninja]==1.0.1 meson-python==0.13.1
python -m pip install --pre --extra-index-url https://pypi.anaconda.org/scientific-python-nightly-wheels/simple numpy
python -m pip install versioneer[toml]
python -m pip install python-dateutil pytz tzdata cython hypothesis>=6.46.1 pytest>=7.3.2 pytest-xdist>=2.2.0 pytest-cov pytest-asyncio>=0.17
- python -m pip list
-
- - name: Build Pandas
- run: |
python -m pip install -ve . --no-build-isolation --no-index
+ python -m pip list
- - name: Build Version
- run: |
- python -c "import pandas; pandas.show_versions();"
-
- - name: Test
+ - name: Run Tests
uses: ./.github/actions/run-tests
| Made some other small cleanups too | https://api.github.com/repos/pandas-dev/pandas/pulls/54587 | 2023-08-16T21:36:52Z | 2023-08-18T07:30:39Z | 2023-08-18T07:30:39Z | 2023-08-18T16:31:20Z |
REF: Refactor conversion of na value | diff --git a/pandas/tests/strings/__init__.py b/pandas/tests/strings/__init__.py
index 9a7622b4f1cd8..496a2d095d85b 100644
--- a/pandas/tests/strings/__init__.py
+++ b/pandas/tests/strings/__init__.py
@@ -1,2 +1,12 @@
# Needed for new arrow string dtype
+
+import pandas as pd
+
object_pyarrow_numpy = ("object",)
+
+
+def _convert_na_value(ser, expected):
+ if ser.dtype != object:
+ # GH#18463
+ expected = expected.fillna(pd.NA)
+ return expected
diff --git a/pandas/tests/strings/test_find_replace.py b/pandas/tests/strings/test_find_replace.py
index bcb8db96b37fa..d5017b1c47d85 100644
--- a/pandas/tests/strings/test_find_replace.py
+++ b/pandas/tests/strings/test_find_replace.py
@@ -11,7 +11,10 @@
Series,
_testing as tm,
)
-from pandas.tests.strings import object_pyarrow_numpy
+from pandas.tests.strings import (
+ _convert_na_value,
+ object_pyarrow_numpy,
+)
# --------------------------------------------------------------------------------------
# str.contains
@@ -758,9 +761,7 @@ def test_findall(any_string_dtype):
ser = Series(["fooBAD__barBAD", np.nan, "foo", "BAD"], dtype=any_string_dtype)
result = ser.str.findall("BAD[_]*")
expected = Series([["BAD__", "BAD"], np.nan, [], ["BAD"]])
- if ser.dtype != object:
- # GH#18463
- expected = expected.fillna(pd.NA)
+ expected = _convert_na_value(ser, expected)
tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/strings/test_split_partition.py b/pandas/tests/strings/test_split_partition.py
index 0298694ccaf71..7fabe238d2b86 100644
--- a/pandas/tests/strings/test_split_partition.py
+++ b/pandas/tests/strings/test_split_partition.py
@@ -12,6 +12,7 @@
Series,
_testing as tm,
)
+from pandas.tests.strings import _convert_na_value
@pytest.mark.parametrize("method", ["split", "rsplit"])
@@ -20,9 +21,7 @@ def test_split(any_string_dtype, method):
result = getattr(values.str, method)("_")
exp = Series([["a", "b", "c"], ["c", "d", "e"], np.nan, ["f", "g", "h"]])
- if values.dtype != object:
- # GH#18463
- exp = exp.fillna(pd.NA)
+ exp = _convert_na_value(values, exp)
tm.assert_series_equal(result, exp)
@@ -32,9 +31,7 @@ def test_split_more_than_one_char(any_string_dtype, method):
values = Series(["a__b__c", "c__d__e", np.nan, "f__g__h"], dtype=any_string_dtype)
result = getattr(values.str, method)("__")
exp = Series([["a", "b", "c"], ["c", "d", "e"], np.nan, ["f", "g", "h"]])
- if values.dtype != object:
- # GH#18463
- exp = exp.fillna(pd.NA)
+ exp = _convert_na_value(values, exp)
tm.assert_series_equal(result, exp)
result = getattr(values.str, method)("__", expand=False)
@@ -46,9 +43,7 @@ def test_split_more_regex_split(any_string_dtype):
values = Series(["a,b_c", "c_d,e", np.nan, "f,g,h"], dtype=any_string_dtype)
result = values.str.split("[,_]")
exp = Series([["a", "b", "c"], ["c", "d", "e"], np.nan, ["f", "g", "h"]])
- if values.dtype != object:
- # GH#18463
- exp = exp.fillna(pd.NA)
+ exp = _convert_na_value(values, exp)
tm.assert_series_equal(result, exp)
@@ -128,9 +123,7 @@ def test_rsplit(any_string_dtype):
values = Series(["a,b_c", "c_d,e", np.nan, "f,g,h"], dtype=any_string_dtype)
result = values.str.rsplit("[,_]")
exp = Series([["a,b_c"], ["c_d,e"], np.nan, ["f,g,h"]])
- if values.dtype != object:
- # GH#18463
- exp = exp.fillna(pd.NA)
+ exp = _convert_na_value(values, exp)
tm.assert_series_equal(result, exp)
@@ -139,9 +132,7 @@ def test_rsplit_max_number(any_string_dtype):
values = Series(["a_b_c", "c_d_e", np.nan, "f_g_h"], dtype=any_string_dtype)
result = values.str.rsplit("_", n=1)
exp = Series([["a_b", "c"], ["c_d", "e"], np.nan, ["f_g", "h"]])
- if values.dtype != object:
- # GH#18463
- exp = exp.fillna(pd.NA)
+ exp = _convert_na_value(values, exp)
tm.assert_series_equal(result, exp)
@@ -455,9 +446,7 @@ def test_partition_series_more_than_one_char(method, exp, any_string_dtype):
s = Series(["a__b__c", "c__d__e", np.nan, "f__g__h", None], dtype=any_string_dtype)
result = getattr(s.str, method)("__", expand=False)
expected = Series(exp)
- if s.dtype != object:
- # GH#18463
- expected = expected.fillna(pd.NA)
+ expected = _convert_na_value(s, expected)
tm.assert_series_equal(result, expected)
@@ -480,9 +469,7 @@ def test_partition_series_none(any_string_dtype, method, exp):
s = Series(["a b c", "c d e", np.nan, "f g h", None], dtype=any_string_dtype)
result = getattr(s.str, method)(expand=False)
expected = Series(exp)
- if s.dtype != object:
- # GH#18463
- expected = expected.fillna(pd.NA)
+ expected = _convert_na_value(s, expected)
tm.assert_series_equal(result, expected)
@@ -505,9 +492,7 @@ def test_partition_series_not_split(any_string_dtype, method, exp):
s = Series(["abc", "cde", np.nan, "fgh", None], dtype=any_string_dtype)
result = getattr(s.str, method)("_", expand=False)
expected = Series(exp)
- if s.dtype != object:
- # GH#18463
- expected = expected.fillna(pd.NA)
+ expected = _convert_na_value(s, expected)
tm.assert_series_equal(result, expected)
@@ -531,9 +516,7 @@ def test_partition_series_unicode(any_string_dtype, method, exp):
result = getattr(s.str, method)("_", expand=False)
expected = Series(exp)
- if s.dtype != object:
- # GH#18463
- expected = expected.fillna(pd.NA)
+ expected = _convert_na_value(s, expected)
tm.assert_series_equal(result, expected)
| precursor for #54585 | https://api.github.com/repos/pandas-dev/pandas/pulls/54586 | 2023-08-16T21:16:29Z | 2023-08-21T09:17:33Z | 2023-08-21T09:17:33Z | 2023-08-21T09:17:36Z |
Use NaN as na_value for new pyarrow_numpy StringDtype | diff --git a/pandas/core/arrays/string_.py b/pandas/core/arrays/string_.py
index f0e1d194cd88f..2394b9af2015e 100644
--- a/pandas/core/arrays/string_.py
+++ b/pandas/core/arrays/string_.py
@@ -101,10 +101,14 @@ class StringDtype(StorageExtensionDtype):
# base class "StorageExtensionDtype") with class variable
name: ClassVar[str] = "string" # type: ignore[misc]
- #: StringDtype().na_value uses pandas.NA
+ #: StringDtype().na_value uses pandas.NA except the implementation that
+ # follows NumPy semantics, which uses nan.
@property
- def na_value(self) -> libmissing.NAType:
- return libmissing.NA
+ def na_value(self) -> libmissing.NAType | float: # type: ignore[override]
+ if self.storage == "pyarrow_numpy":
+ return np.nan
+ else:
+ return libmissing.NA
_metadata = ("storage",)
diff --git a/pandas/tests/arrays/string_/test_string.py b/pandas/tests/arrays/string_/test_string.py
index b8f872529bc1a..24d8e43708b91 100644
--- a/pandas/tests/arrays/string_/test_string.py
+++ b/pandas/tests/arrays/string_/test_string.py
@@ -17,6 +17,13 @@
)
+def na_val(dtype):
+ if dtype.storage == "pyarrow_numpy":
+ return np.nan
+ else:
+ return pd.NA
+
+
@pytest.fixture
def dtype(string_storage):
"""Fixture giving StringDtype from parametrized 'string_storage'"""
@@ -31,26 +38,34 @@ def cls(dtype):
def test_repr(dtype):
df = pd.DataFrame({"A": pd.array(["a", pd.NA, "b"], dtype=dtype)})
- expected = " A\n0 a\n1 <NA>\n2 b"
+ if dtype.storage == "pyarrow_numpy":
+ expected = " A\n0 a\n1 NaN\n2 b"
+ else:
+ expected = " A\n0 a\n1 <NA>\n2 b"
assert repr(df) == expected
- expected = "0 a\n1 <NA>\n2 b\nName: A, dtype: string"
+ if dtype.storage == "pyarrow_numpy":
+ expected = "0 a\n1 NaN\n2 b\nName: A, dtype: string"
+ else:
+ expected = "0 a\n1 <NA>\n2 b\nName: A, dtype: string"
assert repr(df.A) == expected
if dtype.storage == "pyarrow":
arr_name = "ArrowStringArray"
+ expected = f"<{arr_name}>\n['a', <NA>, 'b']\nLength: 3, dtype: string"
elif dtype.storage == "pyarrow_numpy":
arr_name = "ArrowStringArrayNumpySemantics"
+ expected = f"<{arr_name}>\n['a', nan, 'b']\nLength: 3, dtype: string"
else:
arr_name = "StringArray"
- expected = f"<{arr_name}>\n['a', <NA>, 'b']\nLength: 3, dtype: string"
+ expected = f"<{arr_name}>\n['a', <NA>, 'b']\nLength: 3, dtype: string"
assert repr(df.A.array) == expected
def test_none_to_nan(cls):
a = cls._from_sequence(["a", None, "b"])
assert a[1] is not None
- assert a[1] is pd.NA
+ assert a[1] is na_val(a.dtype)
def test_setitem_validates(cls):
@@ -213,13 +228,9 @@ def test_comparison_methods_scalar(comparison_op, dtype):
other = "a"
result = getattr(a, op_name)(other)
if dtype.storage == "pyarrow_numpy":
- expected = np.array([getattr(item, op_name)(other) for item in a], dtype=object)
- expected = (
- pd.array(expected, dtype="boolean")
- .to_numpy(na_value=False)
- .astype(np.bool_)
- )
- tm.assert_numpy_array_equal(result, expected)
+ expected = np.array([getattr(item, op_name)(other) for item in a])
+ expected[1] = False
+ tm.assert_numpy_array_equal(result, expected.astype(np.bool_))
else:
expected_dtype = "boolean[pyarrow]" if dtype.storage == "pyarrow" else "boolean"
expected = np.array([getattr(item, op_name)(other) for item in a], dtype=object)
@@ -415,7 +426,7 @@ def test_min_max(method, skipna, dtype, request):
expected = "a" if method == "min" else "c"
assert result == expected
else:
- assert result is pd.NA
+ assert result is na_val(arr.dtype)
@pytest.mark.parametrize("method", ["min", "max"])
@@ -483,7 +494,7 @@ def test_arrow_roundtrip(dtype, string_storage2):
expected = df.astype(f"string[{string_storage2}]")
tm.assert_frame_equal(result, expected)
# ensure the missing value is represented by NA and not np.nan or None
- assert result.loc[2, "a"] is pd.NA
+ assert result.loc[2, "a"] is na_val(result["a"].dtype)
def test_arrow_load_from_zero_chunks(dtype, string_storage2):
@@ -581,7 +592,7 @@ def test_astype_from_float_dtype(float_dtype, dtype):
def test_to_numpy_returns_pdna_default(dtype):
arr = pd.array(["a", pd.NA, "b"], dtype=dtype)
result = np.array(arr)
- expected = np.array(["a", pd.NA, "b"], dtype=object)
+ expected = np.array(["a", na_val(dtype), "b"], dtype=object)
tm.assert_numpy_array_equal(result, expected)
@@ -621,7 +632,7 @@ def test_setitem_scalar_with_mask_validation(dtype):
mask = np.array([False, True, False])
ser[mask] = None
- assert ser.array[1] is pd.NA
+ assert ser.array[1] is na_val(ser.dtype)
# for other non-string we should also raise an error
ser = pd.Series(["a", "b", "c"], dtype=dtype)
diff --git a/pandas/tests/strings/__init__.py b/pandas/tests/strings/__init__.py
index bf119f2721ed4..01b49b5e5b633 100644
--- a/pandas/tests/strings/__init__.py
+++ b/pandas/tests/strings/__init__.py
@@ -1,4 +1,4 @@
-# Needed for new arrow string dtype
+import numpy as np
import pandas as pd
@@ -7,6 +7,9 @@
def _convert_na_value(ser, expected):
if ser.dtype != object:
- # GH#18463
- expected = expected.fillna(pd.NA)
+ if ser.dtype.storage == "pyarrow_numpy":
+ expected = expected.fillna(np.nan)
+ else:
+ # GH#18463
+ expected = expected.fillna(pd.NA)
return expected
diff --git a/pandas/tests/strings/test_split_partition.py b/pandas/tests/strings/test_split_partition.py
index 7fabe238d2b86..0a7d409773dd6 100644
--- a/pandas/tests/strings/test_split_partition.py
+++ b/pandas/tests/strings/test_split_partition.py
@@ -12,7 +12,10 @@
Series,
_testing as tm,
)
-from pandas.tests.strings import _convert_na_value
+from pandas.tests.strings import (
+ _convert_na_value,
+ object_pyarrow_numpy,
+)
@pytest.mark.parametrize("method", ["split", "rsplit"])
@@ -113,8 +116,8 @@ def test_split_object_mixed(expand, method):
def test_split_n(any_string_dtype, method, n):
s = Series(["a b", pd.NA, "b c"], dtype=any_string_dtype)
expected = Series([["a", "b"], pd.NA, ["b", "c"]])
-
result = getattr(s.str, method)(" ", n=n)
+ expected = _convert_na_value(s, expected)
tm.assert_series_equal(result, expected)
@@ -381,7 +384,7 @@ def test_split_nan_expand(any_string_dtype):
# check that these are actually np.nan/pd.NA and not None
# TODO see GH 18463
# tm.assert_frame_equal does not differentiate
- if any_string_dtype == "object":
+ if any_string_dtype in object_pyarrow_numpy:
assert all(np.isnan(x) for x in result.iloc[1])
else:
assert all(x is pd.NA for x in result.iloc[1])
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
xref https://github.com/pandas-dev/pandas/issues/54792 | https://api.github.com/repos/pandas-dev/pandas/pulls/54585 | 2023-08-16T21:11:15Z | 2023-08-26T10:40:39Z | 2023-08-26T10:40:38Z | 2023-08-28T09:47:24Z |
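The diff in this record makes `StringDtype.na_value` depend on the storage backend. A standalone toy class (not the real pandas `StringDtype`) sketching that dtype-dependent sentinel; it assumes pandas and numpy are installed:

```python
import numpy as np
import pandas as pd


class ToyStringDtype:
    """Mimics the na_value property from the diff: the new "pyarrow_numpy"
    storage follows NumPy semantics and reports np.nan, while the other
    storages keep pd.NA."""

    def __init__(self, storage="python"):
        self.storage = storage

    @property
    def na_value(self):
        if self.storage == "pyarrow_numpy":
            return np.nan
        return pd.NA


assert np.isnan(ToyStringDtype("pyarrow_numpy").na_value)
assert ToyStringDtype("pyarrow").na_value is pd.NA
```

This is why the test changes in the record branch on `dtype.storage == "pyarrow_numpy"`: reprs show `NaN` instead of `<NA>`, and comparisons return plain NumPy bool arrays rather than nullable booleans.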
Backport PR #54579 on branch 2.1.x (ENH: Reflect changes from `numpy` namespace refactor Part 3) | diff --git a/asv_bench/benchmarks/algos/isin.py b/asv_bench/benchmarks/algos/isin.py
index ac79ab65cea81..92797425b2c30 100644
--- a/asv_bench/benchmarks/algos/isin.py
+++ b/asv_bench/benchmarks/algos/isin.py
@@ -247,7 +247,7 @@ def setup(self, series_type, vals_type):
elif series_type == "long":
ser_vals = np.arange(N_many)
elif series_type == "long_floats":
- ser_vals = np.arange(N_many, dtype=np.float_)
+ ser_vals = np.arange(N_many, dtype=np.float64)
self.series = Series(ser_vals).astype(object)
@@ -258,7 +258,7 @@ def setup(self, series_type, vals_type):
elif vals_type == "long":
values = np.arange(N_many)
elif vals_type == "long_floats":
- values = np.arange(N_many, dtype=np.float_)
+ values = np.arange(N_many, dtype=np.float64)
self.values = values.astype(object)
diff --git a/doc/source/getting_started/comparison/comparison_with_sql.rst b/doc/source/getting_started/comparison/comparison_with_sql.rst
index 7a83d50416186..f0eaa7362c52c 100644
--- a/doc/source/getting_started/comparison/comparison_with_sql.rst
+++ b/doc/source/getting_started/comparison/comparison_with_sql.rst
@@ -107,7 +107,7 @@ methods.
.. ipython:: python
frame = pd.DataFrame(
- {"col1": ["A", "B", np.NaN, "C", "D"], "col2": ["F", np.NaN, "G", "H", "I"]}
+ {"col1": ["A", "B", np.nan, "C", "D"], "col2": ["F", np.nan, "G", "H", "I"]}
)
frame
diff --git a/doc/source/user_guide/enhancingperf.rst b/doc/source/user_guide/enhancingperf.rst
index 2ddc3e709be85..bc2f4420da784 100644
--- a/doc/source/user_guide/enhancingperf.rst
+++ b/doc/source/user_guide/enhancingperf.rst
@@ -183,8 +183,8 @@ can be improved by passing an ``np.ndarray``.
...: return s * dx
...: cpdef np.ndarray[double] apply_integrate_f(np.ndarray col_a, np.ndarray col_b,
...: np.ndarray col_N):
- ...: assert (col_a.dtype == np.float_
- ...: and col_b.dtype == np.float_ and col_N.dtype == np.int_)
+ ...: assert (col_a.dtype == np.float64
+ ...: and col_b.dtype == np.float64 and col_N.dtype == np.int_)
...: cdef Py_ssize_t i, n = len(col_N)
...: assert (len(col_a) == len(col_b) == n)
...: cdef np.ndarray[double] res = np.empty(n)
diff --git a/doc/source/user_guide/gotchas.rst b/doc/source/user_guide/gotchas.rst
index 67106df328361..c00a236ff4e9d 100644
--- a/doc/source/user_guide/gotchas.rst
+++ b/doc/source/user_guide/gotchas.rst
@@ -327,7 +327,7 @@ present in the more domain-specific statistical programming language `R
``numpy.unsignedinteger`` | ``uint8, uint16, uint32, uint64``
``numpy.object_`` | ``object_``
``numpy.bool_`` | ``bool_``
- ``numpy.character`` | ``string_, unicode_``
+ ``numpy.character`` | ``bytes_, str_``
The R language, by contrast, only has a handful of built-in data types:
``integer``, ``numeric`` (floating-point), ``character``, and
diff --git a/doc/source/user_guide/io.rst b/doc/source/user_guide/io.rst
index 6e352c52cd60e..df2f1bccc3cff 100644
--- a/doc/source/user_guide/io.rst
+++ b/doc/source/user_guide/io.rst
@@ -4881,7 +4881,7 @@ unspecified columns of the given DataFrame. The argument ``selector``
defines which table is the selector table (which you can make queries from).
The argument ``dropna`` will drop rows from the input ``DataFrame`` to ensure
tables are synchronized. This means that if a row for one of the tables
-being written to is entirely ``np.NaN``, that row will be dropped from all tables.
+being written to is entirely ``np.nan``, that row will be dropped from all tables.
If ``dropna`` is False, **THE USER IS RESPONSIBLE FOR SYNCHRONIZING THE TABLES**.
Remember that entirely ``np.Nan`` rows are not written to the HDFStore, so if
diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index 73a523b14f9f7..38c6e1123aaae 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -556,7 +556,7 @@ You must pass in the ``line_terminator`` explicitly, even in this case.
.. _whatsnew_0240.bug_fixes.nan_with_str_dtype:
-Proper handling of ``np.NaN`` in a string data-typed column with the Python engine
+Proper handling of ``np.nan`` in a string data-typed column with the Python engine
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
There was bug in :func:`read_excel` and :func:`read_csv` with the Python
diff --git a/pandas/_libs/algos.pyx b/pandas/_libs/algos.pyx
index 0b6ea58f987d4..9eed70a23c9dd 100644
--- a/pandas/_libs/algos.pyx
+++ b/pandas/_libs/algos.pyx
@@ -59,7 +59,7 @@ from pandas._libs.util cimport get_nat
cdef:
float64_t FP_ERR = 1e-13
- float64_t NaN = <float64_t>np.NaN
+ float64_t NaN = <float64_t>np.nan
int64_t NPY_NAT = get_nat()
diff --git a/pandas/_libs/groupby.pyx b/pandas/_libs/groupby.pyx
index 20499016f951e..7635b261d4149 100644
--- a/pandas/_libs/groupby.pyx
+++ b/pandas/_libs/groupby.pyx
@@ -52,7 +52,7 @@ from pandas._libs.missing cimport checknull
cdef int64_t NPY_NAT = util.get_nat()
-cdef float64_t NaN = <float64_t>np.NaN
+cdef float64_t NaN = <float64_t>np.nan
cdef enum InterpolationEnumType:
INTERPOLATION_LINEAR,
diff --git a/pandas/_libs/lib.pyx b/pandas/_libs/lib.pyx
index a96152ccdf3cc..2681115bbdcfb 100644
--- a/pandas/_libs/lib.pyx
+++ b/pandas/_libs/lib.pyx
@@ -144,7 +144,7 @@ cdef:
object oINT64_MIN = <int64_t>INT64_MIN
object oUINT64_MAX = <uint64_t>UINT64_MAX
- float64_t NaN = <float64_t>np.NaN
+ float64_t NaN = <float64_t>np.nan
# python-visible
i8max = <int64_t>INT64_MAX
diff --git a/pandas/_libs/tslibs/util.pxd b/pandas/_libs/tslibs/util.pxd
index e25e7e8b94e1d..519d3fc939efa 100644
--- a/pandas/_libs/tslibs/util.pxd
+++ b/pandas/_libs/tslibs/util.pxd
@@ -75,7 +75,7 @@ cdef inline bint is_integer_object(object obj) noexcept nogil:
cdef inline bint is_float_object(object obj) noexcept nogil:
"""
- Cython equivalent of `isinstance(val, (float, np.float_))`
+ Cython equivalent of `isinstance(val, (float, np.float64))`
Parameters
----------
@@ -91,7 +91,7 @@ cdef inline bint is_float_object(object obj) noexcept nogil:
cdef inline bint is_complex_object(object obj) noexcept nogil:
"""
- Cython equivalent of `isinstance(val, (complex, np.complex_))`
+ Cython equivalent of `isinstance(val, (complex, np.complex128))`
Parameters
----------
diff --git a/pandas/_libs/window/aggregations.pyx b/pandas/_libs/window/aggregations.pyx
index 425c5ade2e2d4..9c151b8269a52 100644
--- a/pandas/_libs/window/aggregations.pyx
+++ b/pandas/_libs/window/aggregations.pyx
@@ -57,7 +57,7 @@ cdef:
float32_t MAXfloat32 = np.inf
float64_t MAXfloat64 = np.inf
- float64_t NaN = <float64_t>np.NaN
+ float64_t NaN = <float64_t>np.nan
cdef bint is_monotonic_increasing_start_end_bounds(
ndarray[int64_t, ndim=1] start, ndarray[int64_t, ndim=1] end
diff --git a/pandas/conftest.py b/pandas/conftest.py
index f756da82157b8..757ca817d1b85 100644
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -777,7 +777,7 @@ def series_with_multilevel_index() -> Series:
index = MultiIndex.from_tuples(tuples)
data = np.random.default_rng(2).standard_normal(8)
ser = Series(data, index=index)
- ser.iloc[3] = np.NaN
+ ser.iloc[3] = np.nan
return ser
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index 6107388bfe78b..aefc94ebd665c 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -2109,7 +2109,7 @@ def _codes(self) -> np.ndarray:
def _box_func(self, i: int):
if i == -1:
- return np.NaN
+ return np.nan
return self.categories[i]
def _unbox_scalar(self, key) -> int:
diff --git a/pandas/core/computation/ops.py b/pandas/core/computation/ops.py
index 9050fb6e76b9c..852bfae1cc79a 100644
--- a/pandas/core/computation/ops.py
+++ b/pandas/core/computation/ops.py
@@ -537,8 +537,8 @@ def __init__(self, lhs, rhs) -> None:
)
# do not upcast float32s to float64 un-necessarily
- acceptable_dtypes = [np.float32, np.float_]
- _cast_inplace(com.flatten(self), acceptable_dtypes, np.float_)
+ acceptable_dtypes = [np.float32, np.float64]
+ _cast_inplace(com.flatten(self), acceptable_dtypes, np.float64)
UNARY_OPS_SYMS = ("+", "-", "~", "not")
diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index 9f7c0b3e36032..657cbce40087a 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -850,7 +850,7 @@ def infer_dtype_from_scalar(val) -> tuple[DtypeObj, Any]:
dtype = np.dtype(np.float64)
elif is_complex(val):
- dtype = np.dtype(np.complex_)
+ dtype = np.dtype(np.complex128)
if lib.is_period(val):
dtype = PeriodDtype(freq=val.freq)
diff --git a/pandas/core/dtypes/common.py b/pandas/core/dtypes/common.py
index a0feb49f47c4e..c2e498e75b7d3 100644
--- a/pandas/core/dtypes/common.py
+++ b/pandas/core/dtypes/common.py
@@ -1351,7 +1351,7 @@ def is_complex_dtype(arr_or_dtype) -> bool:
False
>>> is_complex_dtype(int)
False
- >>> is_complex_dtype(np.complex_)
+ >>> is_complex_dtype(np.complex128)
True
>>> is_complex_dtype(np.array(['a', 'b']))
False
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index be0d046697ba9..954573febed41 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -5307,7 +5307,7 @@ def reindex(
level : int or name
Broadcast across a level, matching Index values on the
passed MultiIndex level.
- fill_value : scalar, default np.NaN
+ fill_value : scalar, default np.nan
Value to use for missing values. Defaults to NaN, but can be any
"compatible" value.
limit : int, default None
@@ -7376,7 +7376,7 @@ def ffill(
2 3.0 4.0 NaN 1.0
3 3.0 3.0 NaN 4.0
- >>> ser = pd.Series([1, np.NaN, 2, 3])
+ >>> ser = pd.Series([1, np.nan, 2, 3])
>>> ser.ffill()
0 1.0
1 1.0
@@ -8375,7 +8375,7 @@ def isna(self) -> Self:
--------
Show which entries in a DataFrame are NA.
- >>> df = pd.DataFrame(dict(age=[5, 6, np.NaN],
+ >>> df = pd.DataFrame(dict(age=[5, 6, np.nan],
... born=[pd.NaT, pd.Timestamp('1939-05-27'),
... pd.Timestamp('1940-04-25')],
... name=['Alfred', 'Batman', ''],
@@ -8394,7 +8394,7 @@ def isna(self) -> Self:
Show which entries in a Series are NA.
- >>> ser = pd.Series([5, 6, np.NaN])
+ >>> ser = pd.Series([5, 6, np.nan])
>>> ser
0 5.0
1 6.0
@@ -8442,7 +8442,7 @@ def notna(self) -> Self:
--------
Show which entries in a DataFrame are not NA.
- >>> df = pd.DataFrame(dict(age=[5, 6, np.NaN],
+ >>> df = pd.DataFrame(dict(age=[5, 6, np.nan],
... born=[pd.NaT, pd.Timestamp('1939-05-27'),
... pd.Timestamp('1940-04-25')],
... name=['Alfred', 'Batman', ''],
@@ -8461,7 +8461,7 @@ def notna(self) -> Self:
Show which entries in a Series are not NA.
- >>> ser = pd.Series([5, 6, np.NaN])
+ >>> ser = pd.Series([5, 6, np.nan])
>>> ser
0 5.0
1 6.0
@@ -8628,7 +8628,7 @@ def clip(
Clips using specific lower threshold per column element, with missing values:
- >>> t = pd.Series([2, -4, np.NaN, 6, 3])
+ >>> t = pd.Series([2, -4, np.nan, 6, 3])
>>> t
0 2.0
1 -4.0
@@ -9828,7 +9828,7 @@ def align(
copy : bool, default True
Always returns new objects. If copy=False and no reindexing is
required then original objects are returned.
- fill_value : scalar, default np.NaN
+ fill_value : scalar, default np.nan
Value to use for missing values. Defaults to NaN, but can be any
"compatible" value.
method : {{'backfill', 'bfill', 'pad', 'ffill', None}}, default None
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index e327dd9d6c5ff..5a7f42a535951 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -5418,7 +5418,7 @@ def _mask_selected_obj(self, mask: npt.NDArray[np.bool_]) -> NDFrameT:
def _reindex_output(
self,
output: OutputFrameOrSeries,
- fill_value: Scalar = np.NaN,
+ fill_value: Scalar = np.nan,
qs: npt.NDArray[np.float64] | None = None,
) -> OutputFrameOrSeries:
"""
@@ -5436,7 +5436,7 @@ def _reindex_output(
----------
output : Series or DataFrame
Object resulting from grouping and applying an operation.
- fill_value : scalar, default np.NaN
+ fill_value : scalar, default np.nan
Value to use for unobserved categories if self.observed is False.
qs : np.ndarray[float64] or None, default None
quantile values, only relevant for quantile.
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 288fd35892fd0..241b2de513a04 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -2848,7 +2848,7 @@ def isna(self) -> npt.NDArray[np.bool_]:
Show which entries in a pandas.Index are NA. The result is an
array.
- >>> idx = pd.Index([5.2, 6.0, np.NaN])
+ >>> idx = pd.Index([5.2, 6.0, np.nan])
>>> idx
Index([5.2, 6.0, nan], dtype='float64')
>>> idx.isna()
@@ -2904,7 +2904,7 @@ def notna(self) -> npt.NDArray[np.bool_]:
Show which entries in an Index are not NA. The result is an
array.
- >>> idx = pd.Index([5.2, 6.0, np.NaN])
+ >>> idx = pd.Index([5.2, 6.0, np.nan])
>>> idx
Index([5.2, 6.0, nan], dtype='float64')
>>> idx.notna()
diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py
index f915c08bb8294..e8b3676e71ae0 100644
--- a/pandas/core/indexes/interval.py
+++ b/pandas/core/indexes/interval.py
@@ -124,7 +124,7 @@ def _get_next_label(label):
elif is_integer_dtype(dtype):
return label + 1
elif is_float_dtype(dtype):
- return np.nextafter(label, np.infty)
+ return np.nextafter(label, np.inf)
else:
raise TypeError(f"cannot determine next label for type {repr(type(label))}")
@@ -141,7 +141,7 @@ def _get_prev_label(label):
elif is_integer_dtype(dtype):
return label - 1
elif is_float_dtype(dtype):
- return np.nextafter(label, -np.infty)
+ return np.nextafter(label, -np.inf)
else:
raise TypeError(f"cannot determine next label for type {repr(type(label))}")
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 814a770b192bf..885675e5caa5a 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -5586,7 +5586,7 @@ def dropna(
Empty strings are not considered NA values. ``None`` is considered an
NA value.
- >>> ser = pd.Series([np.NaN, 2, pd.NaT, '', None, 'I stay'])
+ >>> ser = pd.Series([np.nan, 2, pd.NaT, '', None, 'I stay'])
>>> ser
0 NaN
1 2
diff --git a/pandas/core/strings/accessor.py b/pandas/core/strings/accessor.py
index e59369db776da..becf9b47b3af1 100644
--- a/pandas/core/strings/accessor.py
+++ b/pandas/core/strings/accessor.py
@@ -1215,7 +1215,7 @@ def contains(
--------
Returning a Series of booleans using only a literal pattern.
- >>> s1 = pd.Series(['Mouse', 'dog', 'house and parrot', '23', np.NaN])
+ >>> s1 = pd.Series(['Mouse', 'dog', 'house and parrot', '23', np.nan])
>>> s1.str.contains('og', regex=False)
0 False
1 True
@@ -1226,7 +1226,7 @@ def contains(
Returning an Index of booleans using only a literal pattern.
- >>> ind = pd.Index(['Mouse', 'dog', 'house and parrot', '23.0', np.NaN])
+ >>> ind = pd.Index(['Mouse', 'dog', 'house and parrot', '23.0', np.nan])
>>> ind.str.contains('23', regex=False)
Index([False, False, False, True, nan], dtype='object')
@@ -3500,7 +3500,7 @@ def str_extractall(arr, pat, flags: int = 0) -> DataFrame:
for match_i, match_tuple in enumerate(regex.findall(subject)):
if isinstance(match_tuple, str):
match_tuple = (match_tuple,)
- na_tuple = [np.NaN if group == "" else group for group in match_tuple]
+ na_tuple = [np.nan if group == "" else group for group in match_tuple]
match_list.append(na_tuple)
result_key = tuple(subject_key + (match_i,))
index_list.append(result_key)
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index 9fe8cbfa159c6..ff26abd5cc26c 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -1715,7 +1715,7 @@ def format_percentiles(
"""
percentiles = np.asarray(percentiles)
- # It checks for np.NaN as well
+ # It checks for np.nan as well
if (
not is_numeric_dtype(percentiles)
or not np.all(percentiles >= 0)
diff --git a/pandas/tests/apply/test_frame_apply.py b/pandas/tests/apply/test_frame_apply.py
index ba052c6936dd9..3a3f73a68374b 100644
--- a/pandas/tests/apply/test_frame_apply.py
+++ b/pandas/tests/apply/test_frame_apply.py
@@ -637,15 +637,15 @@ def test_apply_with_byte_string():
tm.assert_frame_equal(result, expected)
-@pytest.mark.parametrize("val", ["asd", 12, None, np.NaN])
+@pytest.mark.parametrize("val", ["asd", 12, None, np.nan])
def test_apply_category_equalness(val):
# Check if categorical comparisons on apply, GH 21239
- df_values = ["asd", None, 12, "asd", "cde", np.NaN]
+ df_values = ["asd", None, 12, "asd", "cde", np.nan]
df = DataFrame({"a": df_values}, dtype="category")
result = df.a.apply(lambda x: x == val)
expected = Series(
- [np.NaN if pd.isnull(x) else x == val for x in df_values], name="a"
+ [np.nan if pd.isnull(x) else x == val for x in df_values], name="a"
)
tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/apply/test_series_apply.py b/pandas/tests/apply/test_series_apply.py
index aea1e03dfe0ee..d3e5ac1b4ca7a 100644
--- a/pandas/tests/apply/test_series_apply.py
+++ b/pandas/tests/apply/test_series_apply.py
@@ -242,7 +242,7 @@ def test_apply_categorical(by_row):
assert result.dtype == object
-@pytest.mark.parametrize("series", [["1-1", "1-1", np.NaN], ["1-1", "1-2", np.NaN]])
+@pytest.mark.parametrize("series", [["1-1", "1-1", np.nan], ["1-1", "1-2", np.nan]])
def test_apply_categorical_with_nan_values(series, by_row):
# GH 20714 bug fixed in: GH 24275
s = Series(series, dtype="category")
@@ -254,7 +254,7 @@ def test_apply_categorical_with_nan_values(series, by_row):
result = s.apply(lambda x: x.split("-")[0], by_row=by_row)
result = result.astype(object)
- expected = Series(["1", "1", np.NaN], dtype="category")
+ expected = Series(["1", "1", np.nan], dtype="category")
expected = expected.astype(object)
tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/arrays/categorical/test_analytics.py b/pandas/tests/arrays/categorical/test_analytics.py
index c42364d4d4377..c2c53fbc4637e 100644
--- a/pandas/tests/arrays/categorical/test_analytics.py
+++ b/pandas/tests/arrays/categorical/test_analytics.py
@@ -73,8 +73,8 @@ def test_min_max_reduce(self):
@pytest.mark.parametrize(
"categories,expected",
[
- (list("ABC"), np.NaN),
- ([1, 2, 3], np.NaN),
+ (list("ABC"), np.nan),
+ ([1, 2, 3], np.nan),
pytest.param(
Series(date_range("2020-01-01", periods=3), dtype="category"),
NaT,
diff --git a/pandas/tests/arrays/interval/test_interval.py b/pandas/tests/arrays/interval/test_interval.py
index e16ef37e8799d..761b85287764f 100644
--- a/pandas/tests/arrays/interval/test_interval.py
+++ b/pandas/tests/arrays/interval/test_interval.py
@@ -129,7 +129,7 @@ def test_set_na(self, left_right_dtypes):
# GH#45484 TypeError, not ValueError, matches what we get with
# non-NA un-holdable value.
with pytest.raises(TypeError, match=msg):
- result[0] = np.NaN
+ result[0] = np.nan
return
result[0] = np.nan
diff --git a/pandas/tests/computation/test_eval.py b/pandas/tests/computation/test_eval.py
index f958d25e51103..9c630e29ea8e6 100644
--- a/pandas/tests/computation/test_eval.py
+++ b/pandas/tests/computation/test_eval.py
@@ -725,7 +725,7 @@ def test_and_logic_string_match(self):
class TestTypeCasting:
@pytest.mark.parametrize("op", ["+", "-", "*", "**", "/"])
# maybe someday... numexpr has too many upcasting rules now
- # chain(*(np.sctypes[x] for x in ['uint', 'int', 'float']))
+ # chain(*(np.core.sctypes[x] for x in ['uint', 'int', 'float']))
@pytest.mark.parametrize("dt", [np.float32, np.float64])
@pytest.mark.parametrize("left_right", [("df", "3"), ("3", "df")])
def test_binop_typecasting(self, engine, parser, op, dt, left_right):
diff --git a/pandas/tests/dtypes/cast/test_infer_dtype.py b/pandas/tests/dtypes/cast/test_infer_dtype.py
index b5d761b3549fa..ed08df74461ef 100644
--- a/pandas/tests/dtypes/cast/test_infer_dtype.py
+++ b/pandas/tests/dtypes/cast/test_infer_dtype.py
@@ -42,7 +42,7 @@ def test_infer_dtype_from_float_scalar(float_numpy_dtype):
@pytest.mark.parametrize(
- "data,exp_dtype", [(12, np.int64), (np.float_(12), np.float64)]
+ "data,exp_dtype", [(12, np.int64), (np.float64(12), np.float64)]
)
def test_infer_dtype_from_python_scalar(data, exp_dtype):
dtype, val = infer_dtype_from_scalar(data)
@@ -58,7 +58,7 @@ def test_infer_dtype_from_boolean(bool_val):
def test_infer_dtype_from_complex(complex_dtype):
data = np.dtype(complex_dtype).type(1)
dtype, val = infer_dtype_from_scalar(data)
- assert dtype == np.complex_
+ assert dtype == np.complex128
def test_infer_dtype_from_datetime():
@@ -153,7 +153,7 @@ def test_infer_dtype_from_scalar_errors():
("foo", np.object_),
(b"foo", np.object_),
(1, np.int64),
- (1.5, np.float_),
+ (1.5, np.float64),
(np.datetime64("2016-01-01"), np.dtype("M8[s]")),
(Timestamp("20160101"), np.dtype("M8[s]")),
(Timestamp("20160101", tz="UTC"), "datetime64[s, UTC]"),
@@ -173,7 +173,7 @@ def test_infer_dtype_from_scalar(value, expected):
([1], np.int_),
(np.array([1], dtype=np.int64), np.int64),
([np.nan, 1, ""], np.object_),
- (np.array([[1.0, 2.0]]), np.float_),
+ (np.array([[1.0, 2.0]]), np.float64),
(Categorical(list("aabc")), "category"),
(Categorical([1, 2, 3]), "category"),
(date_range("20160101", periods=3), np.dtype("=M8[ns]")),
diff --git a/pandas/tests/dtypes/test_common.py b/pandas/tests/dtypes/test_common.py
index 0043ace1b9590..471e456146178 100644
--- a/pandas/tests/dtypes/test_common.py
+++ b/pandas/tests/dtypes/test_common.py
@@ -652,7 +652,7 @@ def test_is_complex_dtype():
assert not com.is_complex_dtype(pd.Series([1, 2]))
assert not com.is_complex_dtype(np.array(["a", "b"]))
- assert com.is_complex_dtype(np.complex_)
+ assert com.is_complex_dtype(np.complex128)
assert com.is_complex_dtype(complex)
assert com.is_complex_dtype(np.array([1 + 1j, 5]))
diff --git a/pandas/tests/dtypes/test_inference.py b/pandas/tests/dtypes/test_inference.py
index 375003e58c21a..df7c787d2b9bf 100644
--- a/pandas/tests/dtypes/test_inference.py
+++ b/pandas/tests/dtypes/test_inference.py
@@ -536,7 +536,7 @@ def test_isneginf_scalar(self, value, expected):
)
def test_maybe_convert_nullable_boolean(self, convert_to_masked_nullable, exp):
# GH 40687
- arr = np.array([True, np.NaN], dtype=object)
+ arr = np.array([True, np.nan], dtype=object)
result = libops.maybe_convert_bool(
arr, set(), convert_to_masked_nullable=convert_to_masked_nullable
)
@@ -862,7 +862,7 @@ def test_maybe_convert_objects_timedelta64_nat(self):
)
def test_maybe_convert_objects_nullable_integer(self, exp):
# GH27335
- arr = np.array([2, np.NaN], dtype=object)
+ arr = np.array([2, np.nan], dtype=object)
result = lib.maybe_convert_objects(arr, convert_to_nullable_dtype=True)
tm.assert_extension_array_equal(result, exp)
@@ -890,7 +890,7 @@ def test_maybe_convert_numeric_nullable_integer(
self, convert_to_masked_nullable, exp
):
# GH 40687
- arr = np.array([2, np.NaN], dtype=object)
+ arr = np.array([2, np.nan], dtype=object)
result = lib.maybe_convert_numeric(
arr, set(), convert_to_masked_nullable=convert_to_masked_nullable
)
@@ -1889,7 +1889,6 @@ def test_is_scalar_numpy_array_scalars(self):
assert is_scalar(np.complex64(2))
assert is_scalar(np.object_("foobar"))
assert is_scalar(np.str_("foobar"))
- assert is_scalar(np.unicode_("foobar"))
assert is_scalar(np.bytes_(b"foobar"))
assert is_scalar(np.datetime64("2014-01-01"))
assert is_scalar(np.timedelta64(1, "h"))
diff --git a/pandas/tests/dtypes/test_missing.py b/pandas/tests/dtypes/test_missing.py
index 170f4f49ba377..451ac2afd1d91 100644
--- a/pandas/tests/dtypes/test_missing.py
+++ b/pandas/tests/dtypes/test_missing.py
@@ -51,7 +51,7 @@
def test_notna_notnull(notna_f):
assert notna_f(1.0)
assert not notna_f(None)
- assert not notna_f(np.NaN)
+ assert not notna_f(np.nan)
msg = "use_inf_as_na option is deprecated"
with tm.assert_produces_warning(FutureWarning, match=msg):
@@ -112,7 +112,7 @@ def test_empty_object(self, shape):
def test_isna_isnull(self, isna_f):
assert not isna_f(1.0)
assert isna_f(None)
- assert isna_f(np.NaN)
+ assert isna_f(np.nan)
assert float("nan")
assert not isna_f(np.inf)
assert not isna_f(-np.inf)
@@ -156,7 +156,7 @@ def test_isna_lists(self):
tm.assert_numpy_array_equal(result, exp)
# GH20675
- result = isna([np.NaN, "world"])
+ result = isna([np.nan, "world"])
exp = np.array([True, False])
tm.assert_numpy_array_equal(result, exp)
diff --git a/pandas/tests/frame/constructors/test_from_records.py b/pandas/tests/frame/constructors/test_from_records.py
index 95f9f2ba4051e..59dca5055f170 100644
--- a/pandas/tests/frame/constructors/test_from_records.py
+++ b/pandas/tests/frame/constructors/test_from_records.py
@@ -269,7 +269,7 @@ def test_from_records_series_categorical_index(self):
series_of_dicts = Series([{"a": 1}, {"a": 2}, {"b": 3}], index=index)
frame = DataFrame.from_records(series_of_dicts, index=index)
expected = DataFrame(
- {"a": [1, 2, np.NaN], "b": [np.NaN, np.NaN, 3]}, index=index
+ {"a": [1, 2, np.nan], "b": [np.nan, np.nan, 3]}, index=index
)
tm.assert_frame_equal(frame, expected)
diff --git a/pandas/tests/frame/methods/test_astype.py b/pandas/tests/frame/methods/test_astype.py
index 34cbebe1b3d3f..6590f10c6b967 100644
--- a/pandas/tests/frame/methods/test_astype.py
+++ b/pandas/tests/frame/methods/test_astype.py
@@ -164,7 +164,7 @@ def test_astype_str(self):
def test_astype_str_float(self):
# see GH#11302
- result = DataFrame([np.NaN]).astype(str)
+ result = DataFrame([np.nan]).astype(str)
expected = DataFrame(["nan"])
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/frame/methods/test_clip.py b/pandas/tests/frame/methods/test_clip.py
index 710978057460a..9bd032a0aefc4 100644
--- a/pandas/tests/frame/methods/test_clip.py
+++ b/pandas/tests/frame/methods/test_clip.py
@@ -166,7 +166,7 @@ def test_clip_with_na_args(self, float_frame):
# GH#40420
data = {"col_0": [9, -3, 0, -1, 5], "col_1": [-2, -7, 6, 8, -5]}
df = DataFrame(data)
- t = Series([2, -4, np.NaN, 6, 3])
+ t = Series([2, -4, np.nan, 6, 3])
result = df.clip(lower=t, axis=0)
expected = DataFrame({"col_0": [9, -3, 0, 6, 5], "col_1": [2, -4, 6, 8, 3]})
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/frame/methods/test_describe.py b/pandas/tests/frame/methods/test_describe.py
index e2f92a1e04cb5..f56a7896c753e 100644
--- a/pandas/tests/frame/methods/test_describe.py
+++ b/pandas/tests/frame/methods/test_describe.py
@@ -332,7 +332,7 @@ def test_describe_percentiles_integer_idx(self):
result = df.describe(percentiles=pct)
expected = DataFrame(
- {"x": [1.0, 1.0, np.NaN, 1.0, *(1.0 for _ in pct), 1.0]},
+ {"x": [1.0, 1.0, np.nan, 1.0, *(1.0 for _ in pct), 1.0]},
index=[
"count",
"mean",
diff --git a/pandas/tests/frame/methods/test_dropna.py b/pandas/tests/frame/methods/test_dropna.py
index 11edf665b5494..7899b4aeac3fd 100644
--- a/pandas/tests/frame/methods/test_dropna.py
+++ b/pandas/tests/frame/methods/test_dropna.py
@@ -231,7 +231,7 @@ def test_dropna_with_duplicate_columns(self):
def test_set_single_column_subset(self):
# GH 41021
- df = DataFrame({"A": [1, 2, 3], "B": list("abc"), "C": [4, np.NaN, 5]})
+ df = DataFrame({"A": [1, 2, 3], "B": list("abc"), "C": [4, np.nan, 5]})
expected = DataFrame(
{"A": [1, 3], "B": list("ac"), "C": [4.0, 5.0]}, index=[0, 2]
)
@@ -248,7 +248,7 @@ def test_single_column_not_present_in_axis(self):
def test_subset_is_nparray(self):
# GH 41021
- df = DataFrame({"A": [1, 2, np.NaN], "B": list("abc"), "C": [4, np.NaN, 5]})
+ df = DataFrame({"A": [1, 2, np.nan], "B": list("abc"), "C": [4, np.nan, 5]})
expected = DataFrame({"A": [1.0], "B": ["a"], "C": [4.0]})
result = df.dropna(subset=np.array(["A", "C"]))
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/frame/methods/test_dtypes.py b/pandas/tests/frame/methods/test_dtypes.py
index 6f21bd4c4b438..4bdf16977dae6 100644
--- a/pandas/tests/frame/methods/test_dtypes.py
+++ b/pandas/tests/frame/methods/test_dtypes.py
@@ -62,15 +62,15 @@ def test_datetime_with_tz_dtypes(self):
def test_dtypes_are_correct_after_column_slice(self):
# GH6525
- df = DataFrame(index=range(5), columns=list("abc"), dtype=np.float_)
+ df = DataFrame(index=range(5), columns=list("abc"), dtype=np.float64)
tm.assert_series_equal(
df.dtypes,
- Series({"a": np.float_, "b": np.float_, "c": np.float_}),
+ Series({"a": np.float64, "b": np.float64, "c": np.float64}),
)
- tm.assert_series_equal(df.iloc[:, 2:].dtypes, Series({"c": np.float_}))
+ tm.assert_series_equal(df.iloc[:, 2:].dtypes, Series({"c": np.float64}))
tm.assert_series_equal(
df.dtypes,
- Series({"a": np.float_, "b": np.float_, "c": np.float_}),
+ Series({"a": np.float64, "b": np.float64, "c": np.float64}),
)
@pytest.mark.parametrize(
diff --git a/pandas/tests/frame/methods/test_replace.py b/pandas/tests/frame/methods/test_replace.py
index 3203482ddf724..61e44b4e24c08 100644
--- a/pandas/tests/frame/methods/test_replace.py
+++ b/pandas/tests/frame/methods/test_replace.py
@@ -666,7 +666,7 @@ def test_replace_NA_with_None(self):
def test_replace_NAT_with_None(self):
# gh-45836
df = DataFrame([pd.NaT, pd.NaT])
- result = df.replace({pd.NaT: None, np.NaN: None})
+ result = df.replace({pd.NaT: None, np.nan: None})
expected = DataFrame([None, None])
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/frame/methods/test_select_dtypes.py b/pandas/tests/frame/methods/test_select_dtypes.py
index 3bfb1af423bdd..67dd5b6217187 100644
--- a/pandas/tests/frame/methods/test_select_dtypes.py
+++ b/pandas/tests/frame/methods/test_select_dtypes.py
@@ -340,7 +340,7 @@ def test_select_dtypes_datetime_with_tz(self):
tm.assert_frame_equal(result, expected)
@pytest.mark.parametrize(
- "dtype", [str, "str", np.string_, "S1", "unicode", np.unicode_, "U1"]
+ "dtype", [str, "str", np.bytes_, "S1", "unicode", np.str_, "U1"]
)
@pytest.mark.parametrize("arg", ["include", "exclude"])
def test_select_dtypes_str_raises(self, dtype, arg):
diff --git a/pandas/tests/frame/methods/test_shift.py b/pandas/tests/frame/methods/test_shift.py
index 35941e9f24a4e..808f0cff2485c 100644
--- a/pandas/tests/frame/methods/test_shift.py
+++ b/pandas/tests/frame/methods/test_shift.py
@@ -681,10 +681,10 @@ def test_shift_with_iterable_basic_functionality(self):
{
"a_0": [1, 2, 3],
"b_0": [4, 5, 6],
- "a_1": [np.NaN, 1.0, 2.0],
- "b_1": [np.NaN, 4.0, 5.0],
- "a_2": [np.NaN, np.NaN, 1.0],
- "b_2": [np.NaN, np.NaN, 4.0],
+ "a_1": [np.nan, 1.0, 2.0],
+ "b_1": [np.nan, 4.0, 5.0],
+ "a_2": [np.nan, np.nan, 1.0],
+ "b_2": [np.nan, np.nan, 4.0],
}
)
tm.assert_frame_equal(expected, shifted)
diff --git a/pandas/tests/frame/test_block_internals.py b/pandas/tests/frame/test_block_internals.py
index 008d7a023576a..9e8d92e832d01 100644
--- a/pandas/tests/frame/test_block_internals.py
+++ b/pandas/tests/frame/test_block_internals.py
@@ -131,22 +131,22 @@ def test_constructor_with_convert(self):
df = DataFrame({"A": [None, 1]})
result = df["A"]
- expected = Series(np.asarray([np.nan, 1], np.float_), name="A")
+ expected = Series(np.asarray([np.nan, 1], np.float64), name="A")
tm.assert_series_equal(result, expected)
df = DataFrame({"A": [1.0, 2]})
result = df["A"]
- expected = Series(np.asarray([1.0, 2], np.float_), name="A")
+ expected = Series(np.asarray([1.0, 2], np.float64), name="A")
tm.assert_series_equal(result, expected)
df = DataFrame({"A": [1.0 + 2.0j, 3]})
result = df["A"]
- expected = Series(np.asarray([1.0 + 2.0j, 3], np.complex_), name="A")
+ expected = Series(np.asarray([1.0 + 2.0j, 3], np.complex128), name="A")
tm.assert_series_equal(result, expected)
df = DataFrame({"A": [1.0 + 2.0j, 3.0]})
result = df["A"]
- expected = Series(np.asarray([1.0 + 2.0j, 3.0], np.complex_), name="A")
+ expected = Series(np.asarray([1.0 + 2.0j, 3.0], np.complex128), name="A")
tm.assert_series_equal(result, expected)
df = DataFrame({"A": [1.0 + 2.0j, True]})
@@ -156,12 +156,12 @@ def test_constructor_with_convert(self):
df = DataFrame({"A": [1.0, None]})
result = df["A"]
- expected = Series(np.asarray([1.0, np.nan], np.float_), name="A")
+ expected = Series(np.asarray([1.0, np.nan], np.float64), name="A")
tm.assert_series_equal(result, expected)
df = DataFrame({"A": [1.0 + 2.0j, None]})
result = df["A"]
- expected = Series(np.asarray([1.0 + 2.0j, np.nan], np.complex_), name="A")
+ expected = Series(np.asarray([1.0 + 2.0j, np.nan], np.complex128), name="A")
tm.assert_series_equal(result, expected)
df = DataFrame({"A": [2.0, 1, True, None]})
@@ -343,9 +343,9 @@ def test_stale_cached_series_bug_473(self, using_copy_on_write):
Y["e"] = Y["e"].astype("object")
if using_copy_on_write:
with tm.raises_chained_assignment_error():
- Y["g"]["c"] = np.NaN
+ Y["g"]["c"] = np.nan
else:
- Y["g"]["c"] = np.NaN
+ Y["g"]["c"] = np.nan
repr(Y)
Y.sum()
Y["g"].sum()
diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index a493084142f7b..c170704150383 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -1781,8 +1781,6 @@ def test_constructor_empty_with_string_dtype(self):
tm.assert_frame_equal(df, expected)
df = DataFrame(index=[0, 1], columns=[0, 1], dtype=np.str_)
tm.assert_frame_equal(df, expected)
- df = DataFrame(index=[0, 1], columns=[0, 1], dtype=np.unicode_)
- tm.assert_frame_equal(df, expected)
df = DataFrame(index=[0, 1], columns=[0, 1], dtype="U5")
tm.assert_frame_equal(df, expected)
@@ -1826,7 +1824,7 @@ def test_constructor_single_value(self):
def test_constructor_with_datetimes(self):
intname = np.dtype(np.int_).name
- floatname = np.dtype(np.float_).name
+ floatname = np.dtype(np.float64).name
objectname = np.dtype(np.object_).name
# single item
diff --git a/pandas/tests/frame/test_reductions.py b/pandas/tests/frame/test_reductions.py
index ab36934533beb..e7b6a0c0b39b0 100644
--- a/pandas/tests/frame/test_reductions.py
+++ b/pandas/tests/frame/test_reductions.py
@@ -134,7 +134,7 @@ def wrapper(x):
# all NA case
if has_skipna:
- all_na = frame * np.NaN
+ all_na = frame * np.nan
r0 = getattr(all_na, opname)(axis=0)
r1 = getattr(all_na, opname)(axis=1)
if opname in ["sum", "prod"]:
@@ -834,9 +834,9 @@ def test_sum_nanops_min_count(self):
@pytest.mark.parametrize(
"kwargs, expected_result",
[
- ({"axis": 1, "min_count": 2}, [3.2, 5.3, np.NaN]),
- ({"axis": 1, "min_count": 3}, [np.NaN, np.NaN, np.NaN]),
- ({"axis": 1, "skipna": False}, [3.2, 5.3, np.NaN]),
+ ({"axis": 1, "min_count": 2}, [3.2, 5.3, np.nan]),
+ ({"axis": 1, "min_count": 3}, [np.nan, np.nan, np.nan]),
+ ({"axis": 1, "skipna": False}, [3.2, 5.3, np.nan]),
],
)
def test_sum_nanops_dtype_min_count(self, float_type, kwargs, expected_result):
@@ -850,9 +850,9 @@ def test_sum_nanops_dtype_min_count(self, float_type, kwargs, expected_result):
@pytest.mark.parametrize(
"kwargs, expected_result",
[
- ({"axis": 1, "min_count": 2}, [2.0, 4.0, np.NaN]),
- ({"axis": 1, "min_count": 3}, [np.NaN, np.NaN, np.NaN]),
- ({"axis": 1, "skipna": False}, [2.0, 4.0, np.NaN]),
+ ({"axis": 1, "min_count": 2}, [2.0, 4.0, np.nan]),
+ ({"axis": 1, "min_count": 3}, [np.nan, np.nan, np.nan]),
+ ({"axis": 1, "skipna": False}, [2.0, 4.0, np.nan]),
],
)
def test_prod_nanops_dtype_min_count(self, float_type, kwargs, expected_result):
@@ -1189,7 +1189,7 @@ def wrapper(x):
f(axis=2)
# all NA case
- all_na = frame * np.NaN
+ all_na = frame * np.nan
r0 = getattr(all_na, opname)(axis=0)
r1 = getattr(all_na, opname)(axis=1)
if opname == "any":
diff --git a/pandas/tests/frame/test_stack_unstack.py b/pandas/tests/frame/test_stack_unstack.py
index cb8e8c5025e3b..c90b871d5d66f 100644
--- a/pandas/tests/frame/test_stack_unstack.py
+++ b/pandas/tests/frame/test_stack_unstack.py
@@ -72,7 +72,7 @@ def test_stack_mixed_level(self, future_stack):
def test_unstack_not_consolidated(self, using_array_manager):
# Gh#34708
- df = DataFrame({"x": [1, 2, np.NaN], "y": [3.0, 4, np.NaN]})
+ df = DataFrame({"x": [1, 2, np.nan], "y": [3.0, 4, np.nan]})
df2 = df[["x"]]
df2["y"] = df["y"]
if not using_array_manager:
@@ -584,7 +584,7 @@ def test_unstack_to_series(self, float_frame):
tm.assert_frame_equal(undo, float_frame)
# check NA handling
- data = DataFrame({"x": [1, 2, np.NaN], "y": [3.0, 4, np.NaN]})
+ data = DataFrame({"x": [1, 2, np.nan], "y": [3.0, 4, np.nan]})
data.index = Index(["a", "b", "c"])
result = data.unstack()
@@ -592,7 +592,7 @@ def test_unstack_to_series(self, float_frame):
levels=[["x", "y"], ["a", "b", "c"]],
codes=[[0, 0, 0, 1, 1, 1], [0, 1, 2, 0, 1, 2]],
)
- expected = Series([1, 2, np.NaN, 3, 4, np.NaN], index=midx)
+ expected = Series([1, 2, np.nan, 3, 4, np.nan], index=midx)
tm.assert_series_equal(result, expected)
@@ -902,9 +902,9 @@ def cast(val):
def test_unstack_nan_index2(self):
# GH7403
df = DataFrame({"A": list("aaaabbbb"), "B": range(8), "C": range(8)})
- # Explicit cast to avoid implicit cast when setting to np.NaN
+ # Explicit cast to avoid implicit cast when setting to np.nan
df = df.astype({"B": "float"})
- df.iloc[3, 1] = np.NaN
+ df.iloc[3, 1] = np.nan
left = df.set_index(["A", "B"]).unstack(0)
vals = [
@@ -921,9 +921,9 @@ def test_unstack_nan_index2(self):
tm.assert_frame_equal(left, right)
df = DataFrame({"A": list("aaaabbbb"), "B": list(range(4)) * 2, "C": range(8)})
- # Explicit cast to avoid implicit cast when setting to np.NaN
+ # Explicit cast to avoid implicit cast when setting to np.nan
df = df.astype({"B": "float"})
- df.iloc[2, 1] = np.NaN
+ df.iloc[2, 1] = np.nan
left = df.set_index(["A", "B"]).unstack(0)
vals = [[2, np.nan], [0, 4], [1, 5], [np.nan, 6], [3, 7]]
@@ -935,9 +935,9 @@ def test_unstack_nan_index2(self):
tm.assert_frame_equal(left, right)
df = DataFrame({"A": list("aaaabbbb"), "B": list(range(4)) * 2, "C": range(8)})
- # Explicit cast to avoid implicit cast when setting to np.NaN
+ # Explicit cast to avoid implicit cast when setting to np.nan
df = df.astype({"B": "float"})
- df.iloc[3, 1] = np.NaN
+ df.iloc[3, 1] = np.nan
left = df.set_index(["A", "B"]).unstack(0)
vals = [[3, np.nan], [0, 4], [1, 5], [2, 6], [np.nan, 7]]
@@ -958,7 +958,7 @@ def test_unstack_nan_index3(self, using_array_manager):
}
)
- df.iloc[3, 1] = np.NaN
+ df.iloc[3, 1] = np.nan
left = df.set_index(["A", "B"]).unstack()
vals = np.array([[3, 0, 1, 2, np.nan, 4], [np.nan, 5, 6, 7, 8, 9]])
@@ -1754,7 +1754,7 @@ def test_stack_mixed_dtype(self, multiindex_dataframe_random_data, future_stack)
result = df["foo"].stack(future_stack=future_stack).sort_index()
tm.assert_series_equal(stacked["foo"], result, check_names=False)
assert result.name is None
- assert stacked["bar"].dtype == np.float_
+ assert stacked["bar"].dtype == np.float64
def test_unstack_bug(self, future_stack):
df = DataFrame(
diff --git a/pandas/tests/groupby/test_categorical.py b/pandas/tests/groupby/test_categorical.py
index d0ae9eeed394f..68ce58ad23690 100644
--- a/pandas/tests/groupby/test_categorical.py
+++ b/pandas/tests/groupby/test_categorical.py
@@ -18,7 +18,7 @@
from pandas.tests.groupby import get_groupby_method_args
-def cartesian_product_for_groupers(result, args, names, fill_value=np.NaN):
+def cartesian_product_for_groupers(result, args, names, fill_value=np.nan):
"""Reindex to a cartesian production for the groupers,
preserving the nature (Categorical) of each grouper
"""
@@ -42,28 +42,28 @@ def f(a):
# These expected values can be used across several tests (i.e. they are
# the same for SeriesGroupBy and DataFrameGroupBy) but they should only be
# hardcoded in one place.
- "all": np.NaN,
- "any": np.NaN,
+ "all": np.nan,
+ "any": np.nan,
"count": 0,
- "corrwith": np.NaN,
- "first": np.NaN,
- "idxmax": np.NaN,
- "idxmin": np.NaN,
- "last": np.NaN,
- "max": np.NaN,
- "mean": np.NaN,
- "median": np.NaN,
- "min": np.NaN,
- "nth": np.NaN,
+ "corrwith": np.nan,
+ "first": np.nan,
+ "idxmax": np.nan,
+ "idxmin": np.nan,
+ "last": np.nan,
+ "max": np.nan,
+ "mean": np.nan,
+ "median": np.nan,
+ "min": np.nan,
+ "nth": np.nan,
"nunique": 0,
- "prod": np.NaN,
- "quantile": np.NaN,
- "sem": np.NaN,
+ "prod": np.nan,
+ "quantile": np.nan,
+ "sem": np.nan,
"size": 0,
- "skew": np.NaN,
- "std": np.NaN,
+ "skew": np.nan,
+ "std": np.nan,
"sum": 0,
- "var": np.NaN,
+ "var": np.nan,
}
@@ -1750,8 +1750,8 @@ def test_series_groupby_first_on_categorical_col_grouped_on_2_categoricals(
cat2 = Categorical([0, 1])
idx = MultiIndex.from_product([cat2, cat2], names=["a", "b"])
expected_dict = {
- "first": Series([0, np.NaN, np.NaN, 1], idx, name="c"),
- "last": Series([1, np.NaN, np.NaN, 0], idx, name="c"),
+ "first": Series([0, np.nan, np.nan, 1], idx, name="c"),
+ "last": Series([1, np.nan, np.nan, 0], idx, name="c"),
}
expected = expected_dict[func]
@@ -1775,8 +1775,8 @@ def test_df_groupby_first_on_categorical_col_grouped_on_2_categoricals(
cat2 = Categorical([0, 1])
idx = MultiIndex.from_product([cat2, cat2], names=["a", "b"])
expected_dict = {
- "first": Series([0, np.NaN, np.NaN, 1], idx, name="c"),
- "last": Series([1, np.NaN, np.NaN, 0], idx, name="c"),
+ "first": Series([0, np.nan, np.nan, 1], idx, name="c"),
+ "last": Series([1, np.nan, np.nan, 0], idx, name="c"),
}
expected = expected_dict[func].to_frame()
diff --git a/pandas/tests/groupby/test_counting.py b/pandas/tests/groupby/test_counting.py
index fd5018d05380c..6c27344ce3110 100644
--- a/pandas/tests/groupby/test_counting.py
+++ b/pandas/tests/groupby/test_counting.py
@@ -232,7 +232,7 @@ def test_count_with_only_nans_in_first_group(self):
def test_count_groupby_column_with_nan_in_groupby_column(self):
# https://github.com/pandas-dev/pandas/issues/32841
- df = DataFrame({"A": [1, 1, 1, 1, 1], "B": [5, 4, np.NaN, 3, 0]})
+ df = DataFrame({"A": [1, 1, 1, 1, 1], "B": [5, 4, np.nan, 3, 0]})
res = df.groupby(["B"]).count()
expected = DataFrame(
index=Index([0.0, 3.0, 4.0, 5.0], name="B"), data={"A": [1, 1, 1, 1]}
diff --git a/pandas/tests/groupby/test_rank.py b/pandas/tests/groupby/test_rank.py
index 26881bdd18274..5d85a0783e024 100644
--- a/pandas/tests/groupby/test_rank.py
+++ b/pandas/tests/groupby/test_rank.py
@@ -578,7 +578,7 @@ def test_rank_min_int():
result = df.groupby("grp").rank()
expected = DataFrame(
- {"int_col": [1.0, 2.0, 1.0], "datetimelike": [np.NaN, 1.0, np.NaN]}
+ {"int_col": [1.0, 2.0, 1.0], "datetimelike": [np.nan, 1.0, np.nan]}
)
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/indexes/datetimes/methods/test_astype.py b/pandas/tests/indexes/datetimes/methods/test_astype.py
index 94cf86b7fb9c5..d339639dc5def 100644
--- a/pandas/tests/indexes/datetimes/methods/test_astype.py
+++ b/pandas/tests/indexes/datetimes/methods/test_astype.py
@@ -20,7 +20,7 @@
class TestDatetimeIndex:
def test_astype(self):
# GH 13149, GH 13209
- idx = DatetimeIndex(["2016-05-16", "NaT", NaT, np.NaN], name="idx")
+ idx = DatetimeIndex(["2016-05-16", "NaT", NaT, np.nan], name="idx")
result = idx.astype(object)
expected = Index(
@@ -84,7 +84,7 @@ def test_astype_str_nat(self):
# GH 13149, GH 13209
# verify that we are returning NaT as a string (and not unicode)
- idx = DatetimeIndex(["2016-05-16", "NaT", NaT, np.NaN])
+ idx = DatetimeIndex(["2016-05-16", "NaT", NaT, np.nan])
result = idx.astype(str)
expected = Index(["2016-05-16", "NaT", "NaT", "NaT"], dtype=object)
tm.assert_index_equal(result, expected)
@@ -141,7 +141,7 @@ def test_astype_str_freq_and_tz(self):
def test_astype_datetime64(self):
# GH 13149, GH 13209
- idx = DatetimeIndex(["2016-05-16", "NaT", NaT, np.NaN], name="idx")
+ idx = DatetimeIndex(["2016-05-16", "NaT", NaT, np.nan], name="idx")
result = idx.astype("datetime64[ns]")
tm.assert_index_equal(result, idx)
@@ -151,7 +151,7 @@ def test_astype_datetime64(self):
tm.assert_index_equal(result, idx)
assert result is idx
- idx_tz = DatetimeIndex(["2016-05-16", "NaT", NaT, np.NaN], tz="EST", name="idx")
+ idx_tz = DatetimeIndex(["2016-05-16", "NaT", NaT, np.nan], tz="EST", name="idx")
msg = "Cannot use .astype to convert from timezone-aware"
with pytest.raises(TypeError, match=msg):
# dt64tz->dt64 deprecated
@@ -202,7 +202,7 @@ def test_astype_object_with_nat(self):
)
def test_astype_raises(self, dtype):
# GH 13149, GH 13209
- idx = DatetimeIndex(["2016-05-16", "NaT", NaT, np.NaN])
+ idx = DatetimeIndex(["2016-05-16", "NaT", NaT, np.nan])
msg = "Cannot cast DatetimeIndex to dtype"
if dtype == "datetime64":
msg = "Casting to unit-less dtype 'datetime64' is not supported"
diff --git a/pandas/tests/indexes/datetimes/test_timezones.py b/pandas/tests/indexes/datetimes/test_timezones.py
index 6f3c83b999e94..09b06ecd5630d 100644
--- a/pandas/tests/indexes/datetimes/test_timezones.py
+++ b/pandas/tests/indexes/datetimes/test_timezones.py
@@ -538,8 +538,8 @@ def test_dti_tz_localize_ambiguous_nat(self, tz):
times = [
"11/06/2011 00:00",
- np.NaN,
- np.NaN,
+ np.nan,
+ np.nan,
"11/06/2011 02:00",
"11/06/2011 03:00",
]
diff --git a/pandas/tests/indexes/interval/test_interval.py b/pandas/tests/indexes/interval/test_interval.py
index 49e8df2b71f22..aff4944e7bd55 100644
--- a/pandas/tests/indexes/interval/test_interval.py
+++ b/pandas/tests/indexes/interval/test_interval.py
@@ -247,12 +247,12 @@ def test_is_unique_interval(self, closed):
assert idx.is_unique is True
# unique NaN
- idx = IntervalIndex.from_tuples([(np.NaN, np.NaN)], closed=closed)
+ idx = IntervalIndex.from_tuples([(np.nan, np.nan)], closed=closed)
assert idx.is_unique is True
# non-unique NaN
idx = IntervalIndex.from_tuples(
- [(np.NaN, np.NaN), (np.NaN, np.NaN)], closed=closed
+ [(np.nan, np.nan), (np.nan, np.nan)], closed=closed
)
assert idx.is_unique is False
diff --git a/pandas/tests/indexes/multi/test_join.py b/pandas/tests/indexes/multi/test_join.py
index c5a3512113655..700af142958b3 100644
--- a/pandas/tests/indexes/multi/test_join.py
+++ b/pandas/tests/indexes/multi/test_join.py
@@ -217,7 +217,7 @@ def test_join_multi_with_nan():
)
df2 = DataFrame(
data={"col2": [2.1, 2.2]},
- index=MultiIndex.from_product([["A"], [np.NaN, 2.0]], names=["id1", "id2"]),
+ index=MultiIndex.from_product([["A"], [np.nan, 2.0]], names=["id1", "id2"]),
)
result = df1.join(df2)
expected = DataFrame(
diff --git a/pandas/tests/indexes/period/methods/test_astype.py b/pandas/tests/indexes/period/methods/test_astype.py
index 2a605d136175e..e54cd73a35f59 100644
--- a/pandas/tests/indexes/period/methods/test_astype.py
+++ b/pandas/tests/indexes/period/methods/test_astype.py
@@ -17,14 +17,14 @@ class TestPeriodIndexAsType:
@pytest.mark.parametrize("dtype", [float, "timedelta64", "timedelta64[ns]"])
def test_astype_raises(self, dtype):
# GH#13149, GH#13209
- idx = PeriodIndex(["2016-05-16", "NaT", NaT, np.NaN], freq="D")
+ idx = PeriodIndex(["2016-05-16", "NaT", NaT, np.nan], freq="D")
msg = "Cannot cast PeriodIndex to dtype"
with pytest.raises(TypeError, match=msg):
idx.astype(dtype)
def test_astype_conversion(self):
# GH#13149, GH#13209
- idx = PeriodIndex(["2016-05-16", "NaT", NaT, np.NaN], freq="D", name="idx")
+ idx = PeriodIndex(["2016-05-16", "NaT", NaT, np.nan], freq="D", name="idx")
result = idx.astype(object)
expected = Index(
diff --git a/pandas/tests/indexes/period/test_pickle.py b/pandas/tests/indexes/period/test_pickle.py
index 82f906d1e361f..cb981ab10064f 100644
--- a/pandas/tests/indexes/period/test_pickle.py
+++ b/pandas/tests/indexes/period/test_pickle.py
@@ -14,7 +14,7 @@
class TestPickle:
@pytest.mark.parametrize("freq", ["D", "M", "A"])
def test_pickle_round_trip(self, freq):
- idx = PeriodIndex(["2016-05-16", "NaT", NaT, np.NaN], freq=freq)
+ idx = PeriodIndex(["2016-05-16", "NaT", NaT, np.nan], freq=freq)
result = tm.round_trip_pickle(idx)
tm.assert_index_equal(result, idx)
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index b3fb5a26ca63f..ffa0b115e34fb 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -473,7 +473,7 @@ def test_empty_fancy(self, index, dtype):
def test_empty_fancy_raises(self, index):
# DatetimeIndex is excluded, because it overrides getitem and should
# be tested separately.
- empty_farr = np.array([], dtype=np.float_)
+ empty_farr = np.array([], dtype=np.float64)
empty_index = type(index)([], dtype=index.dtype)
assert index[[]].identical(empty_index)
diff --git a/pandas/tests/indexes/timedeltas/methods/test_astype.py b/pandas/tests/indexes/timedeltas/methods/test_astype.py
index 9b17a8af59ac5..f69f0fd3d78e2 100644
--- a/pandas/tests/indexes/timedeltas/methods/test_astype.py
+++ b/pandas/tests/indexes/timedeltas/methods/test_astype.py
@@ -45,7 +45,7 @@ def test_astype_object_with_nat(self):
def test_astype(self):
# GH 13149, GH 13209
- idx = TimedeltaIndex([1e14, "NaT", NaT, np.NaN], name="idx")
+ idx = TimedeltaIndex([1e14, "NaT", NaT, np.nan], name="idx")
result = idx.astype(object)
expected = Index(
@@ -78,7 +78,7 @@ def test_astype_uint(self):
def test_astype_timedelta64(self):
# GH 13149, GH 13209
- idx = TimedeltaIndex([1e14, "NaT", NaT, np.NaN])
+ idx = TimedeltaIndex([1e14, "NaT", NaT, np.nan])
msg = (
r"Cannot convert from timedelta64\[ns\] to timedelta64. "
@@ -98,7 +98,7 @@ def test_astype_timedelta64(self):
@pytest.mark.parametrize("dtype", [float, "datetime64", "datetime64[ns]"])
def test_astype_raises(self, dtype):
# GH 13149, GH 13209
- idx = TimedeltaIndex([1e14, "NaT", NaT, np.NaN])
+ idx = TimedeltaIndex([1e14, "NaT", NaT, np.nan])
msg = "Cannot cast TimedeltaIndex to dtype"
with pytest.raises(TypeError, match=msg):
idx.astype(dtype)
diff --git a/pandas/tests/indexing/test_loc.py b/pandas/tests/indexing/test_loc.py
index abbf22a7fc70a..d0b6adfda0241 100644
--- a/pandas/tests/indexing/test_loc.py
+++ b/pandas/tests/indexing/test_loc.py
@@ -201,7 +201,7 @@ def test_column_types_consistent(self):
df = DataFrame(
data={
"channel": [1, 2, 3],
- "A": ["String 1", np.NaN, "String 2"],
+ "A": ["String 1", np.nan, "String 2"],
"B": [
Timestamp("2019-06-11 11:00:00"),
pd.NaT,
diff --git a/pandas/tests/interchange/test_impl.py b/pandas/tests/interchange/test_impl.py
index 2a65937a82200..8a25a2c1889f3 100644
--- a/pandas/tests/interchange/test_impl.py
+++ b/pandas/tests/interchange/test_impl.py
@@ -32,7 +32,7 @@ def string_data():
"234,3245.67",
"gSaf,qWer|Gre",
"asd3,4sad|",
- np.NaN,
+ np.nan,
]
}
diff --git a/pandas/tests/internals/test_internals.py b/pandas/tests/internals/test_internals.py
index 97d9f13bd9e9e..b7108896f01ed 100644
--- a/pandas/tests/internals/test_internals.py
+++ b/pandas/tests/internals/test_internals.py
@@ -473,7 +473,7 @@ def test_set_change_dtype(self, mgr):
mgr2.iset(
mgr2.items.get_loc("quux"), np.random.default_rng(2).standard_normal(N)
)
- assert mgr2.iget(idx).dtype == np.float_
+ assert mgr2.iget(idx).dtype == np.float64
def test_copy(self, mgr):
cp = mgr.copy(deep=False)
diff --git a/pandas/tests/io/formats/test_to_csv.py b/pandas/tests/io/formats/test_to_csv.py
index 32509a799fa69..c8e984a92f418 100644
--- a/pandas/tests/io/formats/test_to_csv.py
+++ b/pandas/tests/io/formats/test_to_csv.py
@@ -181,7 +181,7 @@ def test_to_csv_na_rep(self):
# see gh-11553
#
# Testing if NaN values are correctly represented in the index.
- df = DataFrame({"a": [0, np.NaN], "b": [0, 1], "c": [2, 3]})
+ df = DataFrame({"a": [0, np.nan], "b": [0, 1], "c": [2, 3]})
expected_rows = ["a,b,c", "0.0,0,2", "_,1,3"]
expected = tm.convert_rows_list_to_csv_str(expected_rows)
@@ -189,7 +189,7 @@ def test_to_csv_na_rep(self):
assert df.set_index(["a", "b"]).to_csv(na_rep="_") == expected
# now with an index containing only NaNs
- df = DataFrame({"a": np.NaN, "b": [0, 1], "c": [2, 3]})
+ df = DataFrame({"a": np.nan, "b": [0, 1], "c": [2, 3]})
expected_rows = ["a,b,c", "_,0,2", "_,1,3"]
expected = tm.convert_rows_list_to_csv_str(expected_rows)
diff --git a/pandas/tests/io/parser/test_parse_dates.py b/pandas/tests/io/parser/test_parse_dates.py
index eecacf29de872..c79fdd9145a6a 100644
--- a/pandas/tests/io/parser/test_parse_dates.py
+++ b/pandas/tests/io/parser/test_parse_dates.py
@@ -46,7 +46,7 @@
def test_read_csv_with_custom_date_parser(all_parsers):
# GH36111
def __custom_date_parser(time):
- time = time.astype(np.float_)
+ time = time.astype(np.float64)
time = time.astype(np.int_) # convert float seconds to int type
return pd.to_timedelta(time, unit="s")
@@ -86,7 +86,7 @@ def __custom_date_parser(time):
def test_read_csv_with_custom_date_parser_parse_dates_false(all_parsers):
# GH44366
def __custom_date_parser(time):
- time = time.astype(np.float_)
+ time = time.astype(np.float64)
time = time.astype(np.int_) # convert float seconds to int type
return pd.to_timedelta(time, unit="s")
diff --git a/pandas/tests/io/test_stata.py b/pandas/tests/io/test_stata.py
index 3cfd86049588b..7459aa1df8f3e 100644
--- a/pandas/tests/io/test_stata.py
+++ b/pandas/tests/io/test_stata.py
@@ -1475,9 +1475,9 @@ def test_stata_111(self, datapath):
df = read_stata(datapath("io", "data", "stata", "stata7_111.dta"))
original = DataFrame(
{
- "y": [1, 1, 1, 1, 1, 0, 0, np.NaN, 0, 0],
- "x": [1, 2, 1, 3, np.NaN, 4, 3, 5, 1, 6],
- "w": [2, np.NaN, 5, 2, 4, 4, 3, 1, 2, 3],
+ "y": [1, 1, 1, 1, 1, 0, 0, np.nan, 0, 0],
+ "x": [1, 2, 1, 3, np.nan, 4, 3, 5, 1, 6],
+ "w": [2, np.nan, 5, 2, 4, 4, 3, 1, 2, 3],
"z": ["a", "b", "c", "d", "e", "", "g", "h", "i", "j"],
}
)
diff --git a/pandas/tests/reductions/test_reductions.py b/pandas/tests/reductions/test_reductions.py
index afe5b3c66a611..87892a81cef3d 100644
--- a/pandas/tests/reductions/test_reductions.py
+++ b/pandas/tests/reductions/test_reductions.py
@@ -867,7 +867,7 @@ def test_idxmin(self):
string_series = tm.makeStringSeries().rename("series")
# add some NaNs
- string_series[5:15] = np.NaN
+ string_series[5:15] = np.nan
# skipna or no
assert string_series[string_series.idxmin()] == string_series.min()
@@ -900,7 +900,7 @@ def test_idxmax(self):
string_series = tm.makeStringSeries().rename("series")
# add some NaNs
- string_series[5:15] = np.NaN
+ string_series[5:15] = np.nan
# skipna or no
assert string_series[string_series.idxmax()] == string_series.max()
diff --git a/pandas/tests/reductions/test_stat_reductions.py b/pandas/tests/reductions/test_stat_reductions.py
index 58c5fc7269aee..55d78c516b6f3 100644
--- a/pandas/tests/reductions/test_stat_reductions.py
+++ b/pandas/tests/reductions/test_stat_reductions.py
@@ -99,7 +99,7 @@ def _check_stat_op(
f = getattr(Series, name)
# add some NaNs
- string_series_[5:15] = np.NaN
+ string_series_[5:15] = np.nan
# mean, idxmax, idxmin, min, and max are valid for dates
if name not in ["max", "min", "mean", "median", "std"]:
diff --git a/pandas/tests/resample/test_datetime_index.py b/pandas/tests/resample/test_datetime_index.py
index 1d72e6d3970ca..dbda751e82113 100644
--- a/pandas/tests/resample/test_datetime_index.py
+++ b/pandas/tests/resample/test_datetime_index.py
@@ -478,7 +478,7 @@ def test_resample_how_method(unit):
)
s.index = s.index.as_unit(unit)
expected = Series(
- [11, np.NaN, np.NaN, np.NaN, np.NaN, np.NaN, 22],
+ [11, np.nan, np.nan, np.nan, np.nan, np.nan, 22],
index=DatetimeIndex(
[
Timestamp("2015-03-31 21:48:50"),
@@ -1356,7 +1356,7 @@ def test_resample_consistency(unit):
i30 = date_range("2002-02-02", periods=4, freq="30T").as_unit(unit)
s = Series(np.arange(4.0), index=i30)
- s.iloc[2] = np.NaN
+ s.iloc[2] = np.nan
# Upsample by factor 3 with reindex() and resample() methods:
i10 = date_range(i30[0], i30[-1], freq="10T").as_unit(unit)
diff --git a/pandas/tests/resample/test_period_index.py b/pandas/tests/resample/test_period_index.py
index 0ded7d7e6bfc5..7559a85de7a6b 100644
--- a/pandas/tests/resample/test_period_index.py
+++ b/pandas/tests/resample/test_period_index.py
@@ -793,7 +793,7 @@ def test_upsampling_ohlc(self, freq, period_mult, kind):
@pytest.mark.parametrize(
"freq, expected_values",
[
- ("1s", [3, np.NaN, 7, 11]),
+ ("1s", [3, np.nan, 7, 11]),
("2s", [3, (7 + 11) / 2]),
("3s", [(3 + 7) / 2, 11]),
],
diff --git a/pandas/tests/reshape/concat/test_concat.py b/pandas/tests/reshape/concat/test_concat.py
index 869bf3ace9492..3efcd930af581 100644
--- a/pandas/tests/reshape/concat/test_concat.py
+++ b/pandas/tests/reshape/concat/test_concat.py
@@ -507,10 +507,10 @@ def test_concat_duplicate_indices_raise(self):
concat([df1, df2], axis=1)
-@pytest.mark.parametrize("dt", np.sctypes["float"])
-def test_concat_no_unnecessary_upcast(dt, frame_or_series):
+def test_concat_no_unnecessary_upcast(float_numpy_dtype, frame_or_series):
# GH 13247
dims = frame_or_series(dtype=object).ndim
+ dt = float_numpy_dtype
dfs = [
frame_or_series(np.array([1], dtype=dt, ndmin=dims)),
@@ -522,8 +522,8 @@ def test_concat_no_unnecessary_upcast(dt, frame_or_series):
@pytest.mark.parametrize("pdt", [Series, DataFrame])
-@pytest.mark.parametrize("dt", np.sctypes["int"])
-def test_concat_will_upcast(dt, pdt):
+def test_concat_will_upcast(pdt, any_signed_int_numpy_dtype):
+ dt = any_signed_int_numpy_dtype
dims = pdt().ndim
dfs = [
pdt(np.array([1], dtype=dt, ndmin=dims)),
diff --git a/pandas/tests/reshape/test_pivot.py b/pandas/tests/reshape/test_pivot.py
index 43786ee15d138..46da18445e135 100644
--- a/pandas/tests/reshape/test_pivot.py
+++ b/pandas/tests/reshape/test_pivot.py
@@ -1950,7 +1950,7 @@ def test_pivot_table_not_series(self):
result = df.pivot_table("col1", index="col3", columns="col2", aggfunc="sum")
expected = DataFrame(
- [[3, np.NaN, np.NaN], [np.NaN, 4, np.NaN], [np.NaN, np.NaN, 5]],
+ [[3, np.nan, np.nan], [np.nan, 4, np.nan], [np.nan, np.nan, 5]],
index=Index([1, 3, 9], name="col3"),
columns=Index(["C", "D", "E"], name="col2"),
)
@@ -2424,7 +2424,7 @@ def test_pivot_table_aggfunc_nunique_with_different_values(self):
],
names=(None, None, "b"),
)
- nparr = np.full((10, 10), np.NaN)
+ nparr = np.full((10, 10), np.nan)
np.fill_diagonal(nparr, 1.0)
expected = DataFrame(nparr, index=Index(range(10), name="a"), columns=columnval)
diff --git a/pandas/tests/scalar/test_na_scalar.py b/pandas/tests/scalar/test_na_scalar.py
index 213fa1791838d..287b7557f50f9 100644
--- a/pandas/tests/scalar/test_na_scalar.py
+++ b/pandas/tests/scalar/test_na_scalar.py
@@ -103,9 +103,9 @@ def test_comparison_ops(comparison_op, other):
False,
np.bool_(False),
np.int_(0),
- np.float_(0),
+ np.float64(0),
np.int_(-0),
- np.float_(-0),
+ np.float64(-0),
],
)
@pytest.mark.parametrize("asarray", [True, False])
@@ -123,7 +123,7 @@ def test_pow_special(value, asarray):
@pytest.mark.parametrize(
- "value", [1, 1.0, True, np.bool_(True), np.int_(1), np.float_(1)]
+ "value", [1, 1.0, True, np.bool_(True), np.int_(1), np.float64(1)]
)
@pytest.mark.parametrize("asarray", [True, False])
def test_rpow_special(value, asarray):
@@ -133,14 +133,14 @@ def test_rpow_special(value, asarray):
if asarray:
result = result[0]
- elif not isinstance(value, (np.float_, np.bool_, np.int_)):
+ elif not isinstance(value, (np.float64, np.bool_, np.int_)):
# this assertion isn't possible with asarray=True
assert isinstance(result, type(value))
assert result == value
-@pytest.mark.parametrize("value", [-1, -1.0, np.int_(-1), np.float_(-1)])
+@pytest.mark.parametrize("value", [-1, -1.0, np.int_(-1), np.float64(-1)])
@pytest.mark.parametrize("asarray", [True, False])
def test_rpow_minus_one(value, asarray):
if asarray:
diff --git a/pandas/tests/series/accessors/test_dt_accessor.py b/pandas/tests/series/accessors/test_dt_accessor.py
index e7fea9aa597b8..dd810a31c25af 100644
--- a/pandas/tests/series/accessors/test_dt_accessor.py
+++ b/pandas/tests/series/accessors/test_dt_accessor.py
@@ -739,9 +739,9 @@ def test_dt_timetz_accessor(self, tz_naive_fixture):
"input_series, expected_output",
[
[["2020-01-01"], [[2020, 1, 3]]],
- [[pd.NaT], [[np.NaN, np.NaN, np.NaN]]],
+ [[pd.NaT], [[np.nan, np.nan, np.nan]]],
[["2019-12-31", "2019-12-29"], [[2020, 1, 2], [2019, 52, 7]]],
- [["2010-01-01", pd.NaT], [[2009, 53, 5], [np.NaN, np.NaN, np.NaN]]],
+ [["2010-01-01", pd.NaT], [[2009, 53, 5], [np.nan, np.nan, np.nan]]],
# see GH#36032
[["2016-01-08", "2016-01-04"], [[2016, 1, 5], [2016, 1, 1]]],
[["2016-01-07", "2016-01-01"], [[2016, 1, 4], [2015, 53, 5]]],
diff --git a/pandas/tests/series/indexing/test_indexing.py b/pandas/tests/series/indexing/test_indexing.py
index 20f8dd1fc5b2a..7b857a487db78 100644
--- a/pandas/tests/series/indexing/test_indexing.py
+++ b/pandas/tests/series/indexing/test_indexing.py
@@ -196,9 +196,9 @@ def test_setitem_ambiguous_keyerror(indexer_sl):
def test_setitem(datetime_series):
- datetime_series[datetime_series.index[5]] = np.NaN
- datetime_series.iloc[[1, 2, 17]] = np.NaN
- datetime_series.iloc[6] = np.NaN
+ datetime_series[datetime_series.index[5]] = np.nan
+ datetime_series.iloc[[1, 2, 17]] = np.nan
+ datetime_series.iloc[6] = np.nan
assert np.isnan(datetime_series.iloc[6])
assert np.isnan(datetime_series.iloc[2])
datetime_series[np.isnan(datetime_series)] = 5
@@ -304,7 +304,7 @@ def test_underlying_data_conversion(using_copy_on_write):
def test_preserve_refs(datetime_series):
seq = datetime_series.iloc[[5, 10, 15]]
- seq.iloc[1] = np.NaN
+ seq.iloc[1] = np.nan
assert not np.isnan(datetime_series.iloc[10])
diff --git a/pandas/tests/series/methods/test_argsort.py b/pandas/tests/series/methods/test_argsort.py
index bd8b7b34bd402..5bcf42aad1db4 100644
--- a/pandas/tests/series/methods/test_argsort.py
+++ b/pandas/tests/series/methods/test_argsort.py
@@ -27,7 +27,7 @@ def test_argsort_numpy(self, datetime_series):
# with missing values
ts = ser.copy()
- ts[::2] = np.NaN
+ ts[::2] = np.nan
msg = "The behavior of Series.argsort in the presence of NA values"
with tm.assert_produces_warning(
diff --git a/pandas/tests/series/methods/test_asof.py b/pandas/tests/series/methods/test_asof.py
index d5f99f721d323..31c264d74d063 100644
--- a/pandas/tests/series/methods/test_asof.py
+++ b/pandas/tests/series/methods/test_asof.py
@@ -65,8 +65,8 @@ def test_scalar(self):
rng = date_range("1/1/1990", periods=N, freq="53s")
# Explicit cast to float avoid implicit cast when setting nan
ts = Series(np.arange(N), index=rng, dtype="float")
- ts.iloc[5:10] = np.NaN
- ts.iloc[15:20] = np.NaN
+ ts.iloc[5:10] = np.nan
+ ts.iloc[15:20] = np.nan
val1 = ts.asof(ts.index[7])
val2 = ts.asof(ts.index[19])
diff --git a/pandas/tests/series/methods/test_combine_first.py b/pandas/tests/series/methods/test_combine_first.py
index c7ca73da9ae66..d2d8eab1cb38b 100644
--- a/pandas/tests/series/methods/test_combine_first.py
+++ b/pandas/tests/series/methods/test_combine_first.py
@@ -36,7 +36,7 @@ def test_combine_first(self):
series = Series(values, index=tm.makeIntIndex(20))
series_copy = series * 2
- series_copy[::2] = np.NaN
+ series_copy[::2] = np.nan
# nothing used from the input
combined = series.combine_first(series_copy)
@@ -70,14 +70,14 @@ def test_combine_first(self):
tm.assert_series_equal(ser, result)
def test_combine_first_dt64(self):
- s0 = to_datetime(Series(["2010", np.NaN]))
- s1 = to_datetime(Series([np.NaN, "2011"]))
+ s0 = to_datetime(Series(["2010", np.nan]))
+ s1 = to_datetime(Series([np.nan, "2011"]))
rs = s0.combine_first(s1)
xp = to_datetime(Series(["2010", "2011"]))
tm.assert_series_equal(rs, xp)
- s0 = to_datetime(Series(["2010", np.NaN]))
- s1 = Series([np.NaN, "2011"])
+ s0 = to_datetime(Series(["2010", np.nan]))
+ s1 = Series([np.nan, "2011"])
rs = s0.combine_first(s1)
xp = Series([datetime(2010, 1, 1), "2011"], dtype="datetime64[ns]")
diff --git a/pandas/tests/series/methods/test_copy.py b/pandas/tests/series/methods/test_copy.py
index 5ebf45090d7b8..77600e0e7d293 100644
--- a/pandas/tests/series/methods/test_copy.py
+++ b/pandas/tests/series/methods/test_copy.py
@@ -27,7 +27,7 @@ def test_copy(self, deep, using_copy_on_write):
else:
assert not np.may_share_memory(ser.values, ser2.values)
- ser2[::2] = np.NaN
+ ser2[::2] = np.nan
if deep is not False or using_copy_on_write:
# Did not modify original Series
diff --git a/pandas/tests/series/methods/test_count.py b/pandas/tests/series/methods/test_count.py
index 90984a2e65cba..9ba163f347198 100644
--- a/pandas/tests/series/methods/test_count.py
+++ b/pandas/tests/series/methods/test_count.py
@@ -12,7 +12,7 @@ class TestSeriesCount:
def test_count(self, datetime_series):
assert datetime_series.count() == len(datetime_series)
- datetime_series[::2] = np.NaN
+ datetime_series[::2] = np.nan
assert datetime_series.count() == np.isfinite(datetime_series).sum()
diff --git a/pandas/tests/series/methods/test_drop_duplicates.py b/pandas/tests/series/methods/test_drop_duplicates.py
index 7e4503be2ec47..96c2e1ba6d9bb 100644
--- a/pandas/tests/series/methods/test_drop_duplicates.py
+++ b/pandas/tests/series/methods/test_drop_duplicates.py
@@ -71,7 +71,7 @@ def test_drop_duplicates_no_duplicates(any_numpy_dtype, keep, values):
class TestSeriesDropDuplicates:
@pytest.fixture(
- params=["int_", "uint", "float_", "unicode_", "timedelta64[h]", "datetime64[D]"]
+ params=["int_", "uint", "float64", "str_", "timedelta64[h]", "datetime64[D]"]
)
def dtype(self, request):
return request.param
diff --git a/pandas/tests/series/methods/test_fillna.py b/pandas/tests/series/methods/test_fillna.py
index 96c3674541e6b..46bc14da59eb0 100644
--- a/pandas/tests/series/methods/test_fillna.py
+++ b/pandas/tests/series/methods/test_fillna.py
@@ -75,7 +75,7 @@ def test_fillna(self):
tm.assert_series_equal(ts, ts.fillna(method="ffill"))
- ts.iloc[2] = np.NaN
+ ts.iloc[2] = np.nan
exp = Series([0.0, 1.0, 1.0, 3.0, 4.0], index=ts.index)
tm.assert_series_equal(ts.fillna(method="ffill"), exp)
@@ -881,7 +881,7 @@ def test_fillna_bug(self):
def test_ffill(self):
ts = Series([0.0, 1.0, 2.0, 3.0, 4.0], index=tm.makeDateIndex(5))
- ts.iloc[2] = np.NaN
+ ts.iloc[2] = np.nan
tm.assert_series_equal(ts.ffill(), ts.fillna(method="ffill"))
def test_ffill_mixed_dtypes_without_missing_data(self):
@@ -892,7 +892,7 @@ def test_ffill_mixed_dtypes_without_missing_data(self):
def test_bfill(self):
ts = Series([0.0, 1.0, 2.0, 3.0, 4.0], index=tm.makeDateIndex(5))
- ts.iloc[2] = np.NaN
+ ts.iloc[2] = np.nan
tm.assert_series_equal(ts.bfill(), ts.fillna(method="bfill"))
def test_pad_nan(self):
diff --git a/pandas/tests/series/methods/test_interpolate.py b/pandas/tests/series/methods/test_interpolate.py
index a984cd16997aa..619690f400d98 100644
--- a/pandas/tests/series/methods/test_interpolate.py
+++ b/pandas/tests/series/methods/test_interpolate.py
@@ -94,7 +94,7 @@ def test_interpolate(self, datetime_series):
ts = Series(np.arange(len(datetime_series), dtype=float), datetime_series.index)
ts_copy = ts.copy()
- ts_copy[5:10] = np.NaN
+ ts_copy[5:10] = np.nan
linear_interp = ts_copy.interpolate(method="linear")
tm.assert_series_equal(linear_interp, ts)
@@ -104,7 +104,7 @@ def test_interpolate(self, datetime_series):
).astype(float)
ord_ts_copy = ord_ts.copy()
- ord_ts_copy[5:10] = np.NaN
+ ord_ts_copy[5:10] = np.nan
time_interp = ord_ts_copy.interpolate(method="time")
tm.assert_series_equal(time_interp, ord_ts)
@@ -112,7 +112,7 @@ def test_interpolate(self, datetime_series):
def test_interpolate_time_raises_for_non_timeseries(self):
# When method='time' is used on a non-TimeSeries that contains a null
# value, a ValueError should be raised.
- non_ts = Series([0, 1, 2, np.NaN])
+ non_ts = Series([0, 1, 2, np.nan])
msg = "time-weighted interpolation only works on Series.* with a DatetimeIndex"
with pytest.raises(ValueError, match=msg):
non_ts.interpolate(method="time")
diff --git a/pandas/tests/series/methods/test_map.py b/pandas/tests/series/methods/test_map.py
index 00d1ad99332e9..783e18e541ad8 100644
--- a/pandas/tests/series/methods/test_map.py
+++ b/pandas/tests/series/methods/test_map.py
@@ -104,7 +104,7 @@ def test_map_series_stringdtype(any_string_dtype):
@pytest.mark.parametrize(
"data, expected_dtype",
- [(["1-1", "1-1", np.NaN], "category"), (["1-1", "1-2", np.NaN], object)],
+ [(["1-1", "1-1", np.nan], "category"), (["1-1", "1-2", np.nan], object)],
)
def test_map_categorical_with_nan_values(data, expected_dtype):
# GH 20714 bug fixed in: GH 24275
@@ -114,7 +114,7 @@ def func(val):
s = Series(data, dtype="category")
result = s.map(func, na_action="ignore")
- expected = Series(["1", "1", np.NaN], dtype=expected_dtype)
+ expected = Series(["1", "1", np.nan], dtype=expected_dtype)
tm.assert_series_equal(result, expected)
@@ -229,11 +229,11 @@ def test_map_int():
left = Series({"a": 1.0, "b": 2.0, "c": 3.0, "d": 4})
right = Series({1: 11, 2: 22, 3: 33})
- assert left.dtype == np.float_
+ assert left.dtype == np.float64
assert issubclass(right.dtype.type, np.integer)
merged = left.map(right)
- assert merged.dtype == np.float_
+ assert merged.dtype == np.float64
assert isna(merged["d"])
assert not isna(merged["c"])
diff --git a/pandas/tests/series/methods/test_pct_change.py b/pandas/tests/series/methods/test_pct_change.py
index 38a42062b275e..4dabf7b87e2cd 100644
--- a/pandas/tests/series/methods/test_pct_change.py
+++ b/pandas/tests/series/methods/test_pct_change.py
@@ -40,7 +40,7 @@ def test_pct_change_with_duplicate_axis(self):
result = Series(range(5), common_idx).pct_change(freq="B")
# the reason that the expected should be like this is documented at PR 28681
- expected = Series([np.NaN, np.inf, np.NaN, np.NaN, 3.0], common_idx)
+ expected = Series([np.nan, np.inf, np.nan, np.nan, 3.0], common_idx)
tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/series/methods/test_rank.py b/pandas/tests/series/methods/test_rank.py
index 766a2415d89fb..24cf97c05c0a8 100644
--- a/pandas/tests/series/methods/test_rank.py
+++ b/pandas/tests/series/methods/test_rank.py
@@ -185,7 +185,7 @@ def test_rank_categorical(self):
# Test na_option for rank data
na_ser = Series(
- ["first", "second", "third", "fourth", "fifth", "sixth", np.NaN]
+ ["first", "second", "third", "fourth", "fifth", "sixth", np.nan]
).astype(
CategoricalDtype(
["first", "second", "third", "fourth", "fifth", "sixth", "seventh"],
@@ -195,7 +195,7 @@ def test_rank_categorical(self):
exp_top = Series([2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 1.0])
exp_bot = Series([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
- exp_keep = Series([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, np.NaN])
+ exp_keep = Series([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, np.nan])
tm.assert_series_equal(na_ser.rank(na_option="top"), exp_top)
tm.assert_series_equal(na_ser.rank(na_option="bottom"), exp_bot)
@@ -204,7 +204,7 @@ def test_rank_categorical(self):
# Test na_option for rank data with ascending False
exp_top = Series([7.0, 6.0, 5.0, 4.0, 3.0, 2.0, 1.0])
exp_bot = Series([6.0, 5.0, 4.0, 3.0, 2.0, 1.0, 7.0])
- exp_keep = Series([6.0, 5.0, 4.0, 3.0, 2.0, 1.0, np.NaN])
+ exp_keep = Series([6.0, 5.0, 4.0, 3.0, 2.0, 1.0, np.nan])
tm.assert_series_equal(na_ser.rank(na_option="top", ascending=False), exp_top)
tm.assert_series_equal(
@@ -223,12 +223,12 @@ def test_rank_categorical(self):
na_ser.rank(na_option=True, ascending=False)
# Test with pct=True
- na_ser = Series(["first", "second", "third", "fourth", np.NaN]).astype(
+ na_ser = Series(["first", "second", "third", "fourth", np.nan]).astype(
CategoricalDtype(["first", "second", "third", "fourth"], True)
)
exp_top = Series([0.4, 0.6, 0.8, 1.0, 0.2])
exp_bot = Series([0.2, 0.4, 0.6, 0.8, 1.0])
- exp_keep = Series([0.25, 0.5, 0.75, 1.0, np.NaN])
+ exp_keep = Series([0.25, 0.5, 0.75, 1.0, np.nan])
tm.assert_series_equal(na_ser.rank(na_option="top", pct=True), exp_top)
tm.assert_series_equal(na_ser.rank(na_option="bottom", pct=True), exp_bot)
diff --git a/pandas/tests/series/methods/test_reindex.py b/pandas/tests/series/methods/test_reindex.py
index 52446f96009d5..2ab1cd13a31d8 100644
--- a/pandas/tests/series/methods/test_reindex.py
+++ b/pandas/tests/series/methods/test_reindex.py
@@ -194,7 +194,7 @@ def test_reindex_int(datetime_series):
reindexed_int = int_ts.reindex(datetime_series.index)
# if NaNs introduced
- assert reindexed_int.dtype == np.float_
+ assert reindexed_int.dtype == np.float64
# NO NaNs introduced
reindexed_int = int_ts.reindex(int_ts.index[::2])
@@ -425,11 +425,11 @@ def test_reindexing_with_float64_NA_log():
s = Series([1.0, NA], dtype=Float64Dtype())
s_reindex = s.reindex(range(3))
result = s_reindex.values._data
- expected = np.array([1, np.NaN, np.NaN])
+ expected = np.array([1, np.nan, np.nan])
tm.assert_numpy_array_equal(result, expected)
with tm.assert_produces_warning(None):
result_log = np.log(s_reindex)
- expected_log = Series([0, np.NaN, np.NaN], dtype=Float64Dtype())
+ expected_log = Series([0, np.nan, np.nan], dtype=Float64Dtype())
tm.assert_series_equal(result_log, expected_log)
diff --git a/pandas/tests/series/methods/test_sort_values.py b/pandas/tests/series/methods/test_sort_values.py
index c3e074dc68c82..4808272879071 100644
--- a/pandas/tests/series/methods/test_sort_values.py
+++ b/pandas/tests/series/methods/test_sort_values.py
@@ -18,7 +18,7 @@ def test_sort_values(self, datetime_series, using_copy_on_write):
tm.assert_series_equal(expected, result)
ts = datetime_series.copy()
- ts[:5] = np.NaN
+ ts[:5] = np.nan
vals = ts.values
result = ts.sort_values()
diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py
index 331afc4345616..611f4a7f790a6 100644
--- a/pandas/tests/series/test_constructors.py
+++ b/pandas/tests/series/test_constructors.py
@@ -165,7 +165,7 @@ def test_constructor(self, datetime_series):
assert id(datetime_series.index) == id(derived.index)
# Mixed type Series
- mixed = Series(["hello", np.NaN], index=[0, 1])
+ mixed = Series(["hello", np.nan], index=[0, 1])
assert mixed.dtype == np.object_
assert np.isnan(mixed[1])
@@ -1464,8 +1464,8 @@ def test_fromDict(self):
assert series.dtype == np.float64
def test_fromValue(self, datetime_series):
- nans = Series(np.NaN, index=datetime_series.index, dtype=np.float64)
- assert nans.dtype == np.float_
+ nans = Series(np.nan, index=datetime_series.index, dtype=np.float64)
+ assert nans.dtype == np.float64
assert len(nans) == len(datetime_series)
strings = Series("foo", index=datetime_series.index)
diff --git a/pandas/tests/series/test_cumulative.py b/pandas/tests/series/test_cumulative.py
index 4c5fd2d44e4f4..e6f7b2a5e69e0 100644
--- a/pandas/tests/series/test_cumulative.py
+++ b/pandas/tests/series/test_cumulative.py
@@ -31,7 +31,7 @@ def test_datetime_series(self, datetime_series, func):
# with missing values
ts = datetime_series.copy()
- ts[::2] = np.NaN
+ ts[::2] = np.nan
result = func(ts)[1::2]
expected = func(np.array(ts.dropna()))
@@ -47,7 +47,7 @@ def test_cummin_cummax(self, datetime_series, method):
tm.assert_numpy_array_equal(result, expected)
ts = datetime_series.copy()
- ts[::2] = np.NaN
+ ts[::2] = np.nan
result = getattr(ts, method)()[1::2]
expected = ufunc(ts.dropna())
diff --git a/pandas/tests/series/test_logical_ops.py b/pandas/tests/series/test_logical_ops.py
index 4dab3e8f62598..26046ef9ba295 100644
--- a/pandas/tests/series/test_logical_ops.py
+++ b/pandas/tests/series/test_logical_ops.py
@@ -93,7 +93,7 @@ def test_logical_operators_int_dtype_with_float(self):
msg = "Cannot perform.+with a dtyped.+array and scalar of type"
with pytest.raises(TypeError, match=msg):
- s_0123 & np.NaN
+ s_0123 & np.nan
with pytest.raises(TypeError, match=msg):
s_0123 & 3.14
msg = "unsupported operand type.+for &:"
@@ -149,11 +149,11 @@ def test_logical_operators_int_dtype_with_object(self):
# GH#9016: support bitwise op for integer types
s_0123 = Series(range(4), dtype="int64")
- result = s_0123 & Series([False, np.NaN, False, False])
+ result = s_0123 & Series([False, np.nan, False, False])
expected = Series([False] * 4)
tm.assert_series_equal(result, expected)
- s_abNd = Series(["a", "b", np.NaN, "d"])
+ s_abNd = Series(["a", "b", np.nan, "d"])
with pytest.raises(TypeError, match="unsupported.* 'int' and 'str'"):
s_0123 & s_abNd
diff --git a/pandas/tests/series/test_missing.py b/pandas/tests/series/test_missing.py
index 9f17f6d86cf93..cafc69c4d0f20 100644
--- a/pandas/tests/series/test_missing.py
+++ b/pandas/tests/series/test_missing.py
@@ -84,7 +84,7 @@ def test_logical_range_select(self, datetime_series):
def test_valid(self, datetime_series):
ts = datetime_series.copy()
ts.index = ts.index._with_freq(None)
- ts[::2] = np.NaN
+ ts[::2] = np.nan
result = ts.dropna()
assert len(result) == ts.count()
diff --git a/pandas/tests/series/test_repr.py b/pandas/tests/series/test_repr.py
index 4c92b5694c43b..f294885fb8f4d 100644
--- a/pandas/tests/series/test_repr.py
+++ b/pandas/tests/series/test_repr.py
@@ -83,7 +83,7 @@ def test_string(self, string_series):
str(string_series.astype(int))
# with NaNs
- string_series[5:7] = np.NaN
+ string_series[5:7] = np.nan
str(string_series)
def test_object(self, object_series):
diff --git a/pandas/tests/test_nanops.py b/pandas/tests/test_nanops.py
index 76784ec726afe..a0062d2b6dd44 100644
--- a/pandas/tests/test_nanops.py
+++ b/pandas/tests/test_nanops.py
@@ -323,7 +323,7 @@ def check_fun_data(
res = testfunc(testarval, axis=axis, skipna=skipna, **kwargs)
if (
- isinstance(targ, np.complex_)
+ isinstance(targ, np.complex128)
and isinstance(res, float)
and np.isnan(targ)
and np.isnan(res)
diff --git a/pandas/tests/tools/test_to_datetime.py b/pandas/tests/tools/test_to_datetime.py
index 5e51edfee17f1..93fe9b05adb4f 100644
--- a/pandas/tests/tools/test_to_datetime.py
+++ b/pandas/tests/tools/test_to_datetime.py
@@ -1570,8 +1570,8 @@ def test_convert_object_to_datetime_with_cache(
(Series([""] * 60), Series([NaT] * 60, dtype="datetime64[ns]")),
(Series([pd.NA] * 20), Series([NaT] * 20, dtype="datetime64[ns]")),
(Series([pd.NA] * 60), Series([NaT] * 60, dtype="datetime64[ns]")),
- (Series([np.NaN] * 20), Series([NaT] * 20, dtype="datetime64[ns]")),
- (Series([np.NaN] * 60), Series([NaT] * 60, dtype="datetime64[ns]")),
+ (Series([np.nan] * 20), Series([NaT] * 20, dtype="datetime64[ns]")),
+ (Series([np.nan] * 60), Series([NaT] * 60, dtype="datetime64[ns]")),
),
)
def test_to_datetime_converts_null_like_to_nat(self, cache, input, expected):
diff --git a/pandas/tests/util/test_assert_almost_equal.py b/pandas/tests/util/test_assert_almost_equal.py
index a86302f158005..8527efdbf7867 100644
--- a/pandas/tests/util/test_assert_almost_equal.py
+++ b/pandas/tests/util/test_assert_almost_equal.py
@@ -293,7 +293,7 @@ def test_assert_almost_equal_null():
_assert_almost_equal_both(None, None)
-@pytest.mark.parametrize("a,b", [(None, np.NaN), (None, 0), (np.NaN, 0)])
+@pytest.mark.parametrize("a,b", [(None, np.nan), (None, 0), (np.nan, 0)])
def test_assert_not_almost_equal_null(a, b):
_assert_not_almost_equal(a, b)
diff --git a/pandas/tests/window/conftest.py b/pandas/tests/window/conftest.py
index 2dd4458172593..73ab470ab97a7 100644
--- a/pandas/tests/window/conftest.py
+++ b/pandas/tests/window/conftest.py
@@ -126,7 +126,7 @@ def series():
"""Make mocked series as fixture."""
arr = np.random.default_rng(2).standard_normal(100)
locs = np.arange(20, 40)
- arr[locs] = np.NaN
+ arr[locs] = np.nan
series = Series(arr, index=bdate_range(datetime(2009, 1, 1), periods=100))
return series
diff --git a/pandas/tests/window/test_api.py b/pandas/tests/window/test_api.py
index d901fe58950e3..33858e10afd75 100644
--- a/pandas/tests/window/test_api.py
+++ b/pandas/tests/window/test_api.py
@@ -223,9 +223,9 @@ def test_count_nonnumeric_types(step):
Period("2012-02"),
Period("2012-03"),
],
- "fl_inf": [1.0, 2.0, np.Inf],
- "fl_nan": [1.0, 2.0, np.NaN],
- "str_nan": ["aa", "bb", np.NaN],
+ "fl_inf": [1.0, 2.0, np.inf],
+ "fl_nan": [1.0, 2.0, np.nan],
+ "str_nan": ["aa", "bb", np.nan],
"dt_nat": dt_nat_col,
"periods_nat": [
Period("2012-01"),
diff --git a/pandas/tests/window/test_apply.py b/pandas/tests/window/test_apply.py
index 6af5a41e96e0a..4e4eca6e772e7 100644
--- a/pandas/tests/window/test_apply.py
+++ b/pandas/tests/window/test_apply.py
@@ -184,8 +184,8 @@ def numpysum(x, par):
def test_nans(raw):
obj = Series(np.random.default_rng(2).standard_normal(50))
- obj[:10] = np.NaN
- obj[-10:] = np.NaN
+ obj[:10] = np.nan
+ obj[-10:] = np.nan
result = obj.rolling(50, min_periods=30).apply(f, raw=raw)
tm.assert_almost_equal(result.iloc[-1], np.mean(obj[10:-10]))
@@ -210,12 +210,12 @@ def test_nans(raw):
def test_center(raw):
obj = Series(np.random.default_rng(2).standard_normal(50))
- obj[:10] = np.NaN
- obj[-10:] = np.NaN
+ obj[:10] = np.nan
+ obj[-10:] = np.nan
result = obj.rolling(20, min_periods=15, center=True).apply(f, raw=raw)
expected = (
- concat([obj, Series([np.NaN] * 9)])
+ concat([obj, Series([np.nan] * 9)])
.rolling(20, min_periods=15)
.apply(f, raw=raw)
.iloc[9:]
diff --git a/pandas/tests/window/test_ewm.py b/pandas/tests/window/test_ewm.py
index 45d481fdd2e44..c5c395414b450 100644
--- a/pandas/tests/window/test_ewm.py
+++ b/pandas/tests/window/test_ewm.py
@@ -417,7 +417,7 @@ def test_ewm_alpha():
# GH 10789
arr = np.random.default_rng(2).standard_normal(100)
locs = np.arange(20, 40)
- arr[locs] = np.NaN
+ arr[locs] = np.nan
s = Series(arr)
a = s.ewm(alpha=0.61722699889169674).mean()
@@ -433,7 +433,7 @@ def test_ewm_domain_checks():
# GH 12492
arr = np.random.default_rng(2).standard_normal(100)
locs = np.arange(20, 40)
- arr[locs] = np.NaN
+ arr[locs] = np.nan
s = Series(arr)
msg = "comass must satisfy: comass >= 0"
@@ -484,8 +484,8 @@ def test_ew_empty_series(method):
def test_ew_min_periods(min_periods, name):
# excluding NaNs correctly
arr = np.random.default_rng(2).standard_normal(50)
- arr[:10] = np.NaN
- arr[-10:] = np.NaN
+ arr[:10] = np.nan
+ arr[-10:] = np.nan
s = Series(arr)
# check min_periods
@@ -515,11 +515,11 @@ def test_ew_min_periods(min_periods, name):
else:
# ewm.std, ewm.var with bias=False require at least
# two values
- tm.assert_series_equal(result, Series([np.NaN]))
+ tm.assert_series_equal(result, Series([np.nan]))
# pass in ints
result2 = getattr(Series(np.arange(50)).ewm(span=10), name)()
- assert result2.dtype == np.float_
+ assert result2.dtype == np.float64
@pytest.mark.parametrize("name", ["cov", "corr"])
@@ -527,8 +527,8 @@ def test_ewm_corr_cov(name):
A = Series(np.random.default_rng(2).standard_normal(50), index=range(50))
B = A[2:] + np.random.default_rng(2).standard_normal(48)
- A[:10] = np.NaN
- B.iloc[-10:] = np.NaN
+ A[:10] = np.nan
+ B.iloc[-10:] = np.nan
result = getattr(A.ewm(com=20, min_periods=5), name)(B)
assert np.isnan(result.values[:14]).all()
@@ -542,8 +542,8 @@ def test_ewm_corr_cov_min_periods(name, min_periods):
A = Series(np.random.default_rng(2).standard_normal(50), index=range(50))
B = A[2:] + np.random.default_rng(2).standard_normal(48)
- A[:10] = np.NaN
- B.iloc[-10:] = np.NaN
+ A[:10] = np.nan
+ B.iloc[-10:] = np.nan
result = getattr(A.ewm(com=20, min_periods=min_periods), name)(B)
# binary functions (ewmcov, ewmcorr) with bias=False require at
@@ -560,13 +560,13 @@ def test_ewm_corr_cov_min_periods(name, min_periods):
result = getattr(Series([1.0]).ewm(com=50, min_periods=min_periods), name)(
Series([1.0])
)
- tm.assert_series_equal(result, Series([np.NaN]))
+ tm.assert_series_equal(result, Series([np.nan]))
@pytest.mark.parametrize("name", ["cov", "corr"])
def test_different_input_array_raise_exception(name):
A = Series(np.random.default_rng(2).standard_normal(50), index=range(50))
- A[:10] = np.NaN
+ A[:10] = np.nan
msg = "other must be a DataFrame or Series"
# exception raised is Exception
diff --git a/pandas/tests/window/test_pairwise.py b/pandas/tests/window/test_pairwise.py
index 8dac6d271510a..b6f2365afb457 100644
--- a/pandas/tests/window/test_pairwise.py
+++ b/pandas/tests/window/test_pairwise.py
@@ -410,7 +410,7 @@ def test_cov_mulittindex(self):
expected = DataFrame(
np.vstack(
(
- np.full((8, 8), np.NaN),
+ np.full((8, 8), np.nan),
np.full((8, 8), 32.000000),
np.full((8, 8), 63.881919),
)
diff --git a/pandas/tests/window/test_rolling.py b/pandas/tests/window/test_rolling.py
index 4df20282bbfa6..70b7534b296f3 100644
--- a/pandas/tests/window/test_rolling.py
+++ b/pandas/tests/window/test_rolling.py
@@ -142,7 +142,7 @@ def test_constructor_timedelta_window_and_minperiods(window, raw):
index=date_range("2017-08-08", periods=n, freq="D"),
)
expected = DataFrame(
- {"value": np.append([np.NaN, 1.0], np.arange(3.0, 27.0, 3))},
+ {"value": np.append([np.nan, 1.0], np.arange(3.0, 27.0, 3))},
index=date_range("2017-08-08", periods=n, freq="D"),
)
result_roll_sum = df.rolling(window=window, min_periods=2).sum()
@@ -1461,15 +1461,15 @@ def test_rolling_mean_all_nan_window_floating_artifacts(start, exp_values):
0.03,
0.03,
0.001,
- np.NaN,
+ np.nan,
0.002,
0.008,
- np.NaN,
- np.NaN,
- np.NaN,
- np.NaN,
- np.NaN,
- np.NaN,
+ np.nan,
+ np.nan,
+ np.nan,
+ np.nan,
+ np.nan,
+ np.nan,
0.005,
0.2,
]
@@ -1480,8 +1480,8 @@ def test_rolling_mean_all_nan_window_floating_artifacts(start, exp_values):
0.005,
0.005,
0.008,
- np.NaN,
- np.NaN,
+ np.nan,
+ np.nan,
0.005,
0.102500,
]
@@ -1495,7 +1495,7 @@ def test_rolling_mean_all_nan_window_floating_artifacts(start, exp_values):
def test_rolling_sum_all_nan_window_floating_artifacts():
# GH#41053
- df = DataFrame([0.002, 0.008, 0.005, np.NaN, np.NaN, np.NaN])
+ df = DataFrame([0.002, 0.008, 0.005, np.nan, np.nan, np.nan])
result = df.rolling(3, min_periods=0).sum()
expected = DataFrame([0.002, 0.010, 0.015, 0.013, 0.005, 0.0])
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/window/test_rolling_functions.py b/pandas/tests/window/test_rolling_functions.py
index bc0b3e496038c..940f0845befa2 100644
--- a/pandas/tests/window/test_rolling_functions.py
+++ b/pandas/tests/window/test_rolling_functions.py
@@ -150,8 +150,8 @@ def test_time_rule_frame(raw, frame, compare_func, roll_func, kwargs, minp):
)
def test_nans(compare_func, roll_func, kwargs):
obj = Series(np.random.default_rng(2).standard_normal(50))
- obj[:10] = np.NaN
- obj[-10:] = np.NaN
+ obj[:10] = np.nan
+ obj[-10:] = np.nan
result = getattr(obj.rolling(50, min_periods=30), roll_func)(**kwargs)
tm.assert_almost_equal(result.iloc[-1], compare_func(obj[10:-10]))
@@ -177,8 +177,8 @@ def test_nans(compare_func, roll_func, kwargs):
def test_nans_count():
obj = Series(np.random.default_rng(2).standard_normal(50))
- obj[:10] = np.NaN
- obj[-10:] = np.NaN
+ obj[:10] = np.nan
+ obj[-10:] = np.nan
result = obj.rolling(50, min_periods=30).count()
tm.assert_almost_equal(
result.iloc[-1], np.isfinite(obj[10:-10]).astype(float).sum()
@@ -241,15 +241,15 @@ def test_min_periods_count(series, step):
)
def test_center(roll_func, kwargs, minp):
obj = Series(np.random.default_rng(2).standard_normal(50))
- obj[:10] = np.NaN
- obj[-10:] = np.NaN
+ obj[:10] = np.nan
+ obj[-10:] = np.nan
result = getattr(obj.rolling(20, min_periods=minp, center=True), roll_func)(
**kwargs
)
expected = (
getattr(
- concat([obj, Series([np.NaN] * 9)]).rolling(20, min_periods=minp), roll_func
+ concat([obj, Series([np.nan] * 9)]).rolling(20, min_periods=minp), roll_func
)(**kwargs)
.iloc[9:]
.reset_index(drop=True)
diff --git a/pandas/tests/window/test_rolling_quantile.py b/pandas/tests/window/test_rolling_quantile.py
index 32296ae3f2470..d5a7010923563 100644
--- a/pandas/tests/window/test_rolling_quantile.py
+++ b/pandas/tests/window/test_rolling_quantile.py
@@ -89,8 +89,8 @@ def test_time_rule_frame(raw, frame, q):
def test_nans(q):
compare_func = partial(scoreatpercentile, per=q)
obj = Series(np.random.default_rng(2).standard_normal(50))
- obj[:10] = np.NaN
- obj[-10:] = np.NaN
+ obj[:10] = np.nan
+ obj[-10:] = np.nan
result = obj.rolling(50, min_periods=30).quantile(q)
tm.assert_almost_equal(result.iloc[-1], compare_func(obj[10:-10]))
@@ -128,12 +128,12 @@ def test_min_periods(series, minp, q, step):
@pytest.mark.parametrize("q", [0.0, 0.1, 0.5, 0.9, 1.0])
def test_center(q):
obj = Series(np.random.default_rng(2).standard_normal(50))
- obj[:10] = np.NaN
- obj[-10:] = np.NaN
+ obj[:10] = np.nan
+ obj[-10:] = np.nan
result = obj.rolling(20, center=True).quantile(q)
expected = (
- concat([obj, Series([np.NaN] * 9)])
+ concat([obj, Series([np.nan] * 9)])
.rolling(20)
.quantile(q)
.iloc[9:]
diff --git a/pandas/tests/window/test_rolling_skew_kurt.py b/pandas/tests/window/test_rolling_skew_kurt.py
index ada726401c4a0..79c14f243e7cc 100644
--- a/pandas/tests/window/test_rolling_skew_kurt.py
+++ b/pandas/tests/window/test_rolling_skew_kurt.py
@@ -79,8 +79,8 @@ def test_nans(sp_func, roll_func):
compare_func = partial(getattr(sp_stats, sp_func), bias=False)
obj = Series(np.random.default_rng(2).standard_normal(50))
- obj[:10] = np.NaN
- obj[-10:] = np.NaN
+ obj[:10] = np.nan
+ obj[-10:] = np.nan
result = getattr(obj.rolling(50, min_periods=30), roll_func)()
tm.assert_almost_equal(result.iloc[-1], compare_func(obj[10:-10]))
@@ -122,12 +122,12 @@ def test_min_periods(series, minp, roll_func, step):
@pytest.mark.parametrize("roll_func", ["kurt", "skew"])
def test_center(roll_func):
obj = Series(np.random.default_rng(2).standard_normal(50))
- obj[:10] = np.NaN
- obj[-10:] = np.NaN
+ obj[:10] = np.nan
+ obj[-10:] = np.nan
result = getattr(obj.rolling(20, center=True), roll_func)()
expected = (
- getattr(concat([obj, Series([np.NaN] * 9)]).rolling(20), roll_func)()
+ getattr(concat([obj, Series([np.nan] * 9)]).rolling(20), roll_func)()
.iloc[9:]
.reset_index(drop=True)
)
@@ -170,14 +170,14 @@ def test_center_reindex_frame(frame, roll_func):
def test_rolling_skew_edge_cases(step):
- expected = Series([np.NaN] * 4 + [0.0])[::step]
+ expected = Series([np.nan] * 4 + [0.0])[::step]
# yields all NaN (0 variance)
d = Series([1] * 5)
x = d.rolling(window=5, step=step).skew()
# index 4 should be 0 as it contains 5 same obs
tm.assert_series_equal(expected, x)
- expected = Series([np.NaN] * 5)[::step]
+ expected = Series([np.nan] * 5)[::step]
# yields all NaN (window too small)
d = Series(np.random.default_rng(2).standard_normal(5))
x = d.rolling(window=2, step=step).skew()
@@ -185,13 +185,13 @@ def test_rolling_skew_edge_cases(step):
# yields [NaN, NaN, NaN, 0.177994, 1.548824]
d = Series([-1.50837035, -0.1297039, 0.19501095, 1.73508164, 0.41941401])
- expected = Series([np.NaN, np.NaN, np.NaN, 0.177994, 1.548824])[::step]
+ expected = Series([np.nan, np.nan, np.nan, 0.177994, 1.548824])[::step]
x = d.rolling(window=4, step=step).skew()
tm.assert_series_equal(expected, x)
def test_rolling_kurt_edge_cases(step):
- expected = Series([np.NaN] * 4 + [-3.0])[::step]
+ expected = Series([np.nan] * 4 + [-3.0])[::step]
# yields all NaN (0 variance)
d = Series([1] * 5)
@@ -199,14 +199,14 @@ def test_rolling_kurt_edge_cases(step):
tm.assert_series_equal(expected, x)
# yields all NaN (window too small)
- expected = Series([np.NaN] * 5)[::step]
+ expected = Series([np.nan] * 5)[::step]
d = Series(np.random.default_rng(2).standard_normal(5))
x = d.rolling(window=3, step=step).kurt()
tm.assert_series_equal(expected, x)
# yields [NaN, NaN, NaN, 1.224307, 2.671499]
d = Series([-1.50837035, -0.1297039, 0.19501095, 1.73508164, 0.41941401])
- expected = Series([np.NaN, np.NaN, np.NaN, 1.224307, 2.671499])[::step]
+ expected = Series([np.nan, np.nan, np.nan, 1.224307, 2.671499])[::step]
x = d.rolling(window=4, step=step).kurt()
tm.assert_series_equal(expected, x)
diff --git a/pandas/tests/window/test_win_type.py b/pandas/tests/window/test_win_type.py
index 2ca02fef796ed..5052019ddb726 100644
--- a/pandas/tests/window/test_win_type.py
+++ b/pandas/tests/window/test_win_type.py
@@ -666,7 +666,7 @@ def test_weighted_var_big_window_no_segfault(win_types, center):
pytest.importorskip("scipy")
x = Series(0)
result = x.rolling(window=16, center=center, win_type=win_types).var()
- expected = Series(np.NaN)
+ expected = Series(np.nan)
tm.assert_series_equal(result, expected)
| Backport PR #54579: ENH: Reflect changes from `numpy` namespace refactor Part 3 | https://api.github.com/repos/pandas-dev/pandas/pulls/54583 | 2023-08-16T20:21:44Z | 2023-08-16T22:01:17Z | 2023-08-16T22:01:17Z | 2023-08-16T22:01:18Z |
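The backport row above mechanically rewrites NumPy aliases that were deprecated and later removed from the `numpy` namespace. As a minimal sketch of what the replacements mean (the alias table below is inferred from the diff, not from any pandas API), the canonical spellings behave identically to the old names:

```python
import numpy as np

# Replacements applied throughout the diff above:
#   np.NaN      -> np.nan
#   np.Inf      -> np.inf
#   np.float_   -> np.float64
#   np.complex_ -> np.complex128
# The canonical names are drop-in equivalents of the removed aliases.

arr = np.array([1.0, np.nan, np.inf])
assert np.isnan(arr[1])          # np.nan is the same IEEE-754 NaN np.NaN was
assert np.isinf(arr[2])          # np.inf is the same positive infinity np.Inf was
assert np.dtype(np.float64) == np.dtype("float64")
assert np.dtype(np.complex128) == np.dtype("complex128")
```

Running a grep-and-replace of the old spellings, as this PR does, changes no behavior; only the names survive the namespace cleanup.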
REF: ujson cleanups | diff --git a/pandas/_libs/src/vendored/ujson/python/objToJSON.c b/pandas/_libs/src/vendored/ujson/python/objToJSON.c
index 1fa82215179a8..4a22de886742c 100644
--- a/pandas/_libs/src/vendored/ujson/python/objToJSON.c
+++ b/pandas/_libs/src/vendored/ujson/python/objToJSON.c
@@ -1318,6 +1318,7 @@ char **NpyArr_encodeLabels(PyArrayObject *labels, PyObjectEncoder *enc,
} else if (PyDate_Check(item) || PyDelta_Check(item)) {
is_datetimelike = 1;
if (PyObject_HasAttrString(item, "_value")) {
+ // pd.Timestamp object or pd.NaT
// see test_date_index_and_values for case with non-nano
i8date = get_long_attr(item, "_value");
} else {
@@ -1471,12 +1472,12 @@ void Object_beginTypeContext(JSOBJ _obj, JSONTypeContext *tc) {
}
// Currently no way to pass longVal to iso function, so use
// state management
- GET_TC(tc)->longValue = longVal;
+ pc->longValue = longVal;
tc->type = JT_UTF8;
} else {
NPY_DATETIMEUNIT base =
((PyObjectEncoder *)tc->encoder)->datetimeUnit;
- GET_TC(tc)->longValue = NpyDateTimeToEpoch(longVal, base);
+ pc->longValue = NpyDateTimeToEpoch(longVal, base);
tc->type = JT_LONG;
}
}
@@ -1497,9 +1498,9 @@ void Object_beginTypeContext(JSOBJ _obj, JSONTypeContext *tc) {
if (PyLong_Check(obj)) {
tc->type = JT_LONG;
int overflow = 0;
- GET_TC(tc)->longValue = PyLong_AsLongLongAndOverflow(obj, &overflow);
+ pc->longValue = PyLong_AsLongLongAndOverflow(obj, &overflow);
int err;
- err = (GET_TC(tc)->longValue == -1) && PyErr_Occurred();
+ err = (pc->longValue == -1) && PyErr_Occurred();
if (overflow) {
tc->type = JT_BIGNUM;
@@ -1513,7 +1514,7 @@ void Object_beginTypeContext(JSOBJ _obj, JSONTypeContext *tc) {
if (npy_isnan(val) || npy_isinf(val)) {
tc->type = JT_NULL;
} else {
- GET_TC(tc)->doubleValue = val;
+ pc->doubleValue = val;
tc->type = JT_DOUBLE;
}
return;
@@ -1526,7 +1527,7 @@ void Object_beginTypeContext(JSOBJ _obj, JSONTypeContext *tc) {
tc->type = JT_UTF8;
return;
} else if (object_is_decimal_type(obj)) {
- GET_TC(tc)->doubleValue = PyFloat_AsDouble(obj);
+ pc->doubleValue = PyFloat_AsDouble(obj);
tc->type = JT_DOUBLE;
return;
} else if (PyDateTime_Check(obj) || PyDate_Check(obj)) {
@@ -1541,7 +1542,7 @@ void Object_beginTypeContext(JSOBJ _obj, JSONTypeContext *tc) {
} else {
NPY_DATETIMEUNIT base =
((PyObjectEncoder *)tc->encoder)->datetimeUnit;
- GET_TC(tc)->longValue = PyDateTimeToEpoch(obj, base);
+ pc->longValue = PyDateTimeToEpoch(obj, base);
tc->type = JT_LONG;
}
return;
@@ -1573,12 +1574,13 @@ void Object_beginTypeContext(JSOBJ _obj, JSONTypeContext *tc) {
} else {
NPY_DATETIMEUNIT base =
((PyObjectEncoder *)tc->encoder)->datetimeUnit;
- GET_TC(tc)->longValue = PyDateTimeToEpoch(obj, base);
+ pc->longValue = PyDateTimeToEpoch(obj, base);
tc->type = JT_LONG;
}
return;
} else if (PyDelta_Check(obj)) {
if (PyObject_HasAttrString(obj, "_value")) {
+ // pd.Timedelta object or pd.NaT
value = get_long_attr(obj, "_value");
} else {
value = total_seconds(obj) * 1000000000LL; // nanoseconds per sec
@@ -1604,11 +1606,11 @@ void Object_beginTypeContext(JSOBJ _obj, JSONTypeContext *tc) {
tc->type = JT_LONG;
}
- GET_TC(tc)->longValue = value;
+ pc->longValue = value;
return;
} else if (PyArray_IsScalar(obj, Integer)) {
tc->type = JT_LONG;
- PyArray_CastScalarToCtype(obj, &(GET_TC(tc)->longValue),
+ PyArray_CastScalarToCtype(obj, &(pc->longValue),
PyArray_DescrFromType(NPY_INT64));
exc = PyErr_Occurred();
@@ -1619,12 +1621,12 @@ void Object_beginTypeContext(JSOBJ _obj, JSONTypeContext *tc) {
return;
} else if (PyArray_IsScalar(obj, Bool)) {
- PyArray_CastScalarToCtype(obj, &(GET_TC(tc)->longValue),
+ PyArray_CastScalarToCtype(obj, &(pc->longValue),
PyArray_DescrFromType(NPY_BOOL));
- tc->type = (GET_TC(tc)->longValue) ? JT_TRUE : JT_FALSE;
+ tc->type = (pc->longValue) ? JT_TRUE : JT_FALSE;
return;
} else if (PyArray_IsScalar(obj, Float) || PyArray_IsScalar(obj, Double)) {
- PyArray_CastScalarToCtype(obj, &(GET_TC(tc)->doubleValue),
+ PyArray_CastScalarToCtype(obj, &(pc->doubleValue),
PyArray_DescrFromType(NPY_DOUBLE));
tc->type = JT_DOUBLE;
return;
diff --git a/pandas/io/excel/_odswriter.py b/pandas/io/excel/_odswriter.py
index 0bc335a9b75b6..74cbe90acdae8 100644
--- a/pandas/io/excel/_odswriter.py
+++ b/pandas/io/excel/_odswriter.py
@@ -2,6 +2,7 @@
from collections import defaultdict
import datetime
+import json
from typing import (
TYPE_CHECKING,
Any,
@@ -10,8 +11,6 @@
overload,
)
-from pandas._libs import json
-
from pandas.io.excel._base import ExcelWriter
from pandas.io.excel._util import (
combine_kwargs,
@@ -257,7 +256,7 @@ def _process_style(self, style: dict[str, Any] | None) -> str | None:
if style is None:
return None
- style_key = json.ujson_dumps(style)
+ style_key = json.dumps(style)
if style_key in self._style_dict:
return self._style_dict[style_key]
name = f"pd{len(self._style_dict)+1}"
diff --git a/pandas/io/excel/_xlsxwriter.py b/pandas/io/excel/_xlsxwriter.py
index afa988a5eda51..6eacac8c064fb 100644
--- a/pandas/io/excel/_xlsxwriter.py
+++ b/pandas/io/excel/_xlsxwriter.py
@@ -1,12 +1,11 @@
from __future__ import annotations
+import json
from typing import (
TYPE_CHECKING,
Any,
)
-from pandas._libs import json
-
from pandas.io.excel._base import ExcelWriter
from pandas.io.excel._util import (
combine_kwargs,
@@ -262,7 +261,7 @@ def _write_cells(
for cell in cells:
val, fmt = self._value_with_fmt(cell.val)
- stylekey = json.ujson_dumps(cell.style)
+ stylekey = json.dumps(cell.style)
if fmt:
stylekey += fmt
diff --git a/pandas/io/json/__init__.py b/pandas/io/json/__init__.py
index ff19cf6e9d4cc..8f4e7a62834b5 100644
--- a/pandas/io/json/__init__.py
+++ b/pandas/io/json/__init__.py
@@ -1,14 +1,14 @@
from pandas.io.json._json import (
read_json,
to_json,
- ujson_dumps as dumps,
- ujson_loads as loads,
+ ujson_dumps,
+ ujson_loads,
)
from pandas.io.json._table_schema import build_table_schema
__all__ = [
- "dumps",
- "loads",
+ "ujson_dumps",
+ "ujson_loads",
"read_json",
"to_json",
"build_table_schema",
diff --git a/pandas/tests/io/json/test_pandas.py b/pandas/tests/io/json/test_pandas.py
index 637d62b98a831..ff9b4acd96499 100644
--- a/pandas/tests/io/json/test_pandas.py
+++ b/pandas/tests/io/json/test_pandas.py
@@ -28,6 +28,8 @@
StringArray,
)
+from pandas.io.json import ujson_dumps
+
def test_literal_json_deprecation():
# PR 53409
@@ -865,14 +867,13 @@ def test_date_index_and_values(self, date_format, as_object, date_typ):
)
def test_convert_dates_infer(self, infer_word):
# GH10747
- from pandas.io.json import dumps
data = [{"id": 1, infer_word: 1036713600000}, {"id": 2}]
expected = DataFrame(
[[1, Timestamp("2002-11-08")], [2, pd.NaT]], columns=["id", infer_word]
)
- result = read_json(StringIO(dumps(data)))[["id", infer_word]]
+ result = read_json(StringIO(ujson_dumps(data)))[["id", infer_word]]
tm.assert_frame_equal(result, expected)
@pytest.mark.parametrize(
@@ -1133,8 +1134,6 @@ def test_default_handler(self):
tm.assert_frame_equal(expected, result, check_index_type=False)
def test_default_handler_indirect(self):
- from pandas.io.json import dumps
-
def default(obj):
if isinstance(obj, complex):
return [("mathjs", "Complex"), ("re", obj.real), ("im", obj.imag)]
@@ -1151,7 +1150,9 @@ def default(obj):
'[9,[[1,null],["STR",null],[[["mathjs","Complex"],'
'["re",4.0],["im",-5.0]],"N\\/A"]]]'
)
- assert dumps(df_list, default_handler=default, orient="values") == expected
+ assert (
+ ujson_dumps(df_list, default_handler=default, orient="values") == expected
+ )
def test_default_handler_numpy_unsupported_dtype(self):
# GH12554 to_json raises 'Unhandled numpy dtype 15'
@@ -1235,23 +1236,19 @@ def test_sparse(self):
],
)
def test_tz_is_utc(self, ts):
- from pandas.io.json import dumps
-
exp = '"2013-01-10T05:00:00.000Z"'
- assert dumps(ts, iso_dates=True) == exp
+ assert ujson_dumps(ts, iso_dates=True) == exp
dt = ts.to_pydatetime()
- assert dumps(dt, iso_dates=True) == exp
+ assert ujson_dumps(dt, iso_dates=True) == exp
def test_tz_is_naive(self):
- from pandas.io.json import dumps
-
ts = Timestamp("2013-01-10 05:00:00")
exp = '"2013-01-10T05:00:00.000"'
- assert dumps(ts, iso_dates=True) == exp
+ assert ujson_dumps(ts, iso_dates=True) == exp
dt = ts.to_pydatetime()
- assert dumps(dt, iso_dates=True) == exp
+ assert ujson_dumps(dt, iso_dates=True) == exp
@pytest.mark.parametrize(
"tz_range",
@@ -1262,8 +1259,6 @@ def test_tz_is_naive(self):
],
)
def test_tz_range_is_utc(self, tz_range):
- from pandas.io.json import dumps
-
exp = '["2013-01-01T05:00:00.000Z","2013-01-02T05:00:00.000Z"]'
dfexp = (
'{"DT":{'
@@ -1271,20 +1266,18 @@ def test_tz_range_is_utc(self, tz_range):
'"1":"2013-01-02T05:00:00.000Z"}}'
)
- assert dumps(tz_range, iso_dates=True) == exp
+ assert ujson_dumps(tz_range, iso_dates=True) == exp
dti = DatetimeIndex(tz_range)
# Ensure datetimes in object array are serialized correctly
# in addition to the normal DTI case
- assert dumps(dti, iso_dates=True) == exp
- assert dumps(dti.astype(object), iso_dates=True) == exp
+ assert ujson_dumps(dti, iso_dates=True) == exp
+ assert ujson_dumps(dti.astype(object), iso_dates=True) == exp
df = DataFrame({"DT": dti})
- result = dumps(df, iso_dates=True)
+ result = ujson_dumps(df, iso_dates=True)
assert result == dfexp
- assert dumps(df.astype({"DT": object}), iso_dates=True)
+ assert ujson_dumps(df.astype({"DT": object}), iso_dates=True)
def test_tz_range_is_naive(self):
- from pandas.io.json import dumps
-
dti = pd.date_range("2013-01-01 05:00:00", periods=2)
exp = '["2013-01-01T05:00:00.000","2013-01-02T05:00:00.000"]'
@@ -1292,12 +1285,12 @@ def test_tz_range_is_naive(self):
# Ensure datetimes in object array are serialized correctly
# in addition to the normal DTI case
- assert dumps(dti, iso_dates=True) == exp
- assert dumps(dti.astype(object), iso_dates=True) == exp
+ assert ujson_dumps(dti, iso_dates=True) == exp
+ assert ujson_dumps(dti.astype(object), iso_dates=True) == exp
df = DataFrame({"DT": dti})
- result = dumps(df, iso_dates=True)
+ result = ujson_dumps(df, iso_dates=True)
assert result == dfexp
- assert dumps(df.astype({"DT": object}), iso_dates=True)
+ assert ujson_dumps(df.astype({"DT": object}), iso_dates=True)
def test_read_inline_jsonl(self):
# GH9180
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54581 | 2023-08-16T17:40:29Z | 2023-08-17T15:37:59Z | 2023-08-17T15:37:59Z | 2024-01-23T15:46:36Z |
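Part of the ujson-cleanups diff above swaps pandas' vendored `json.ujson_dumps` for the standard-library `json.dumps` when building style-cache keys in the Excel writers. A minimal sketch of that caching pattern (the cache layout here is illustrative, not the exact pandas internals): any deterministic serialization of the style dict works as a dictionary key, so the stdlib serializer is sufficient.

```python
import json

# Serialize a cell-style dict to a stable string key, as the ODS/xlsxwriter
# writers do, then reuse it to look up an already-registered named style.
style = {"font": {"bold": True}, "borders": {"top": "thin"}}
style_key = json.dumps(style)

style_cache = {style_key: "pd1"}  # maps serialized style -> generated style name
assert style_cache[json.dumps(style)] == "pd1"  # same dict -> same key -> cache hit
```

Since the keys are only compared for equality within one process, the vendored ujson offered no advantage here, which is what motivates the swap.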
ENH: Reflect changes from `numpy` namespace refactor Part 3 | diff --git a/asv_bench/benchmarks/algos/isin.py b/asv_bench/benchmarks/algos/isin.py
index ac79ab65cea81..92797425b2c30 100644
--- a/asv_bench/benchmarks/algos/isin.py
+++ b/asv_bench/benchmarks/algos/isin.py
@@ -247,7 +247,7 @@ def setup(self, series_type, vals_type):
elif series_type == "long":
ser_vals = np.arange(N_many)
elif series_type == "long_floats":
- ser_vals = np.arange(N_many, dtype=np.float_)
+ ser_vals = np.arange(N_many, dtype=np.float64)
self.series = Series(ser_vals).astype(object)
@@ -258,7 +258,7 @@ def setup(self, series_type, vals_type):
elif vals_type == "long":
values = np.arange(N_many)
elif vals_type == "long_floats":
- values = np.arange(N_many, dtype=np.float_)
+ values = np.arange(N_many, dtype=np.float64)
self.values = values.astype(object)
diff --git a/doc/source/getting_started/comparison/comparison_with_sql.rst b/doc/source/getting_started/comparison/comparison_with_sql.rst
index 7a83d50416186..f0eaa7362c52c 100644
--- a/doc/source/getting_started/comparison/comparison_with_sql.rst
+++ b/doc/source/getting_started/comparison/comparison_with_sql.rst
@@ -107,7 +107,7 @@ methods.
.. ipython:: python
frame = pd.DataFrame(
- {"col1": ["A", "B", np.NaN, "C", "D"], "col2": ["F", np.NaN, "G", "H", "I"]}
+ {"col1": ["A", "B", np.nan, "C", "D"], "col2": ["F", np.nan, "G", "H", "I"]}
)
frame
diff --git a/doc/source/user_guide/enhancingperf.rst b/doc/source/user_guide/enhancingperf.rst
index 2ddc3e709be85..bc2f4420da784 100644
--- a/doc/source/user_guide/enhancingperf.rst
+++ b/doc/source/user_guide/enhancingperf.rst
@@ -183,8 +183,8 @@ can be improved by passing an ``np.ndarray``.
...: return s * dx
...: cpdef np.ndarray[double] apply_integrate_f(np.ndarray col_a, np.ndarray col_b,
...: np.ndarray col_N):
- ...: assert (col_a.dtype == np.float_
- ...: and col_b.dtype == np.float_ and col_N.dtype == np.int_)
+ ...: assert (col_a.dtype == np.float64
+ ...: and col_b.dtype == np.float64 and col_N.dtype == np.int_)
...: cdef Py_ssize_t i, n = len(col_N)
...: assert (len(col_a) == len(col_b) == n)
...: cdef np.ndarray[double] res = np.empty(n)
diff --git a/doc/source/user_guide/gotchas.rst b/doc/source/user_guide/gotchas.rst
index 67106df328361..c00a236ff4e9d 100644
--- a/doc/source/user_guide/gotchas.rst
+++ b/doc/source/user_guide/gotchas.rst
@@ -327,7 +327,7 @@ present in the more domain-specific statistical programming language `R
``numpy.unsignedinteger`` | ``uint8, uint16, uint32, uint64``
``numpy.object_`` | ``object_``
``numpy.bool_`` | ``bool_``
- ``numpy.character`` | ``string_, unicode_``
+ ``numpy.character`` | ``bytes_, str_``
The R language, by contrast, only has a handful of built-in data types:
``integer``, ``numeric`` (floating-point), ``character``, and
diff --git a/doc/source/user_guide/io.rst b/doc/source/user_guide/io.rst
index 6e352c52cd60e..df2f1bccc3cff 100644
--- a/doc/source/user_guide/io.rst
+++ b/doc/source/user_guide/io.rst
@@ -4881,7 +4881,7 @@ unspecified columns of the given DataFrame. The argument ``selector``
defines which table is the selector table (which you can make queries from).
The argument ``dropna`` will drop rows from the input ``DataFrame`` to ensure
tables are synchronized. This means that if a row for one of the tables
-being written to is entirely ``np.NaN``, that row will be dropped from all tables.
+being written to is entirely ``np.nan``, that row will be dropped from all tables.
If ``dropna`` is False, **THE USER IS RESPONSIBLE FOR SYNCHRONIZING THE TABLES**.
Remember that entirely ``np.Nan`` rows are not written to the HDFStore, so if
diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index 73a523b14f9f7..38c6e1123aaae 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -556,7 +556,7 @@ You must pass in the ``line_terminator`` explicitly, even in this case.
.. _whatsnew_0240.bug_fixes.nan_with_str_dtype:
-Proper handling of ``np.NaN`` in a string data-typed column with the Python engine
+Proper handling of ``np.nan`` in a string data-typed column with the Python engine
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
There was bug in :func:`read_excel` and :func:`read_csv` with the Python
diff --git a/pandas/_libs/algos.pyx b/pandas/_libs/algos.pyx
index 0b6ea58f987d4..9eed70a23c9dd 100644
--- a/pandas/_libs/algos.pyx
+++ b/pandas/_libs/algos.pyx
@@ -59,7 +59,7 @@ from pandas._libs.util cimport get_nat
cdef:
float64_t FP_ERR = 1e-13
- float64_t NaN = <float64_t>np.NaN
+ float64_t NaN = <float64_t>np.nan
int64_t NPY_NAT = get_nat()
diff --git a/pandas/_libs/groupby.pyx b/pandas/_libs/groupby.pyx
index 20499016f951e..7635b261d4149 100644
--- a/pandas/_libs/groupby.pyx
+++ b/pandas/_libs/groupby.pyx
@@ -52,7 +52,7 @@ from pandas._libs.missing cimport checknull
cdef int64_t NPY_NAT = util.get_nat()
-cdef float64_t NaN = <float64_t>np.NaN
+cdef float64_t NaN = <float64_t>np.nan
cdef enum InterpolationEnumType:
INTERPOLATION_LINEAR,
diff --git a/pandas/_libs/lib.pyx b/pandas/_libs/lib.pyx
index a96152ccdf3cc..2681115bbdcfb 100644
--- a/pandas/_libs/lib.pyx
+++ b/pandas/_libs/lib.pyx
@@ -144,7 +144,7 @@ cdef:
object oINT64_MIN = <int64_t>INT64_MIN
object oUINT64_MAX = <uint64_t>UINT64_MAX
- float64_t NaN = <float64_t>np.NaN
+ float64_t NaN = <float64_t>np.nan
# python-visible
i8max = <int64_t>INT64_MAX
diff --git a/pandas/_libs/tslibs/util.pxd b/pandas/_libs/tslibs/util.pxd
index e25e7e8b94e1d..519d3fc939efa 100644
--- a/pandas/_libs/tslibs/util.pxd
+++ b/pandas/_libs/tslibs/util.pxd
@@ -75,7 +75,7 @@ cdef inline bint is_integer_object(object obj) noexcept nogil:
cdef inline bint is_float_object(object obj) noexcept nogil:
"""
- Cython equivalent of `isinstance(val, (float, np.float_))`
+ Cython equivalent of `isinstance(val, (float, np.float64))`
Parameters
----------
@@ -91,7 +91,7 @@ cdef inline bint is_float_object(object obj) noexcept nogil:
cdef inline bint is_complex_object(object obj) noexcept nogil:
"""
- Cython equivalent of `isinstance(val, (complex, np.complex_))`
+ Cython equivalent of `isinstance(val, (complex, np.complex128))`
Parameters
----------
diff --git a/pandas/_libs/window/aggregations.pyx b/pandas/_libs/window/aggregations.pyx
index 425c5ade2e2d4..9c151b8269a52 100644
--- a/pandas/_libs/window/aggregations.pyx
+++ b/pandas/_libs/window/aggregations.pyx
@@ -57,7 +57,7 @@ cdef:
float32_t MAXfloat32 = np.inf
float64_t MAXfloat64 = np.inf
- float64_t NaN = <float64_t>np.NaN
+ float64_t NaN = <float64_t>np.nan
cdef bint is_monotonic_increasing_start_end_bounds(
ndarray[int64_t, ndim=1] start, ndarray[int64_t, ndim=1] end
diff --git a/pandas/conftest.py b/pandas/conftest.py
index f756da82157b8..757ca817d1b85 100644
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -777,7 +777,7 @@ def series_with_multilevel_index() -> Series:
index = MultiIndex.from_tuples(tuples)
data = np.random.default_rng(2).standard_normal(8)
ser = Series(data, index=index)
- ser.iloc[3] = np.NaN
+ ser.iloc[3] = np.nan
return ser
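As context for the renames in the hunks above: a minimal sketch (not part of the patch) of why `np.NaN` must become `np.nan`. The capitalized spellings `np.NaN` and `np.NAN` were aliases for the same float object and were removed in NumPy 2.0, so only the lowercase name keeps working.

```python
import numpy as np

# np.nan is the canonical spelling; the capitalized aliases np.NaN / np.NAN
# were removed in NumPy 2.0, which is what motivates this mechanical rename.
x = np.nan
assert np.isnan(x)
assert x != x  # NaN never compares equal to itself, hence np.isnan/isna
```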
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index 6107388bfe78b..aefc94ebd665c 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -2109,7 +2109,7 @@ def _codes(self) -> np.ndarray:
def _box_func(self, i: int):
if i == -1:
- return np.NaN
+ return np.nan
return self.categories[i]
def _unbox_scalar(self, key) -> int:
diff --git a/pandas/core/computation/ops.py b/pandas/core/computation/ops.py
index 9050fb6e76b9c..852bfae1cc79a 100644
--- a/pandas/core/computation/ops.py
+++ b/pandas/core/computation/ops.py
@@ -537,8 +537,8 @@ def __init__(self, lhs, rhs) -> None:
)
# do not upcast float32s to float64 un-necessarily
- acceptable_dtypes = [np.float32, np.float_]
- _cast_inplace(com.flatten(self), acceptable_dtypes, np.float_)
+ acceptable_dtypes = [np.float32, np.float64]
+ _cast_inplace(com.flatten(self), acceptable_dtypes, np.float64)
UNARY_OPS_SYMS = ("+", "-", "~", "not")
diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index 9f7c0b3e36032..657cbce40087a 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -850,7 +850,7 @@ def infer_dtype_from_scalar(val) -> tuple[DtypeObj, Any]:
dtype = np.dtype(np.float64)
elif is_complex(val):
- dtype = np.dtype(np.complex_)
+ dtype = np.dtype(np.complex128)
if lib.is_period(val):
dtype = PeriodDtype(freq=val.freq)
diff --git a/pandas/core/dtypes/common.py b/pandas/core/dtypes/common.py
index a0feb49f47c4e..c2e498e75b7d3 100644
--- a/pandas/core/dtypes/common.py
+++ b/pandas/core/dtypes/common.py
@@ -1351,7 +1351,7 @@ def is_complex_dtype(arr_or_dtype) -> bool:
False
>>> is_complex_dtype(int)
False
- >>> is_complex_dtype(np.complex_)
+ >>> is_complex_dtype(np.complex128)
True
>>> is_complex_dtype(np.array(['a', 'b']))
False
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index be0d046697ba9..954573febed41 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -5307,7 +5307,7 @@ def reindex(
level : int or name
Broadcast across a level, matching Index values on the
passed MultiIndex level.
- fill_value : scalar, default np.NaN
+ fill_value : scalar, default np.nan
Value to use for missing values. Defaults to NaN, but can be any
"compatible" value.
limit : int, default None
@@ -7376,7 +7376,7 @@ def ffill(
2 3.0 4.0 NaN 1.0
3 3.0 3.0 NaN 4.0
- >>> ser = pd.Series([1, np.NaN, 2, 3])
+ >>> ser = pd.Series([1, np.nan, 2, 3])
>>> ser.ffill()
0 1.0
1 1.0
@@ -8375,7 +8375,7 @@ def isna(self) -> Self:
--------
Show which entries in a DataFrame are NA.
- >>> df = pd.DataFrame(dict(age=[5, 6, np.NaN],
+ >>> df = pd.DataFrame(dict(age=[5, 6, np.nan],
... born=[pd.NaT, pd.Timestamp('1939-05-27'),
... pd.Timestamp('1940-04-25')],
... name=['Alfred', 'Batman', ''],
@@ -8394,7 +8394,7 @@ def isna(self) -> Self:
Show which entries in a Series are NA.
- >>> ser = pd.Series([5, 6, np.NaN])
+ >>> ser = pd.Series([5, 6, np.nan])
>>> ser
0 5.0
1 6.0
@@ -8442,7 +8442,7 @@ def notna(self) -> Self:
--------
Show which entries in a DataFrame are not NA.
- >>> df = pd.DataFrame(dict(age=[5, 6, np.NaN],
+ >>> df = pd.DataFrame(dict(age=[5, 6, np.nan],
... born=[pd.NaT, pd.Timestamp('1939-05-27'),
... pd.Timestamp('1940-04-25')],
... name=['Alfred', 'Batman', ''],
@@ -8461,7 +8461,7 @@ def notna(self) -> Self:
Show which entries in a Series are not NA.
- >>> ser = pd.Series([5, 6, np.NaN])
+ >>> ser = pd.Series([5, 6, np.nan])
>>> ser
0 5.0
1 6.0
@@ -8628,7 +8628,7 @@ def clip(
Clips using specific lower threshold per column element, with missing values:
- >>> t = pd.Series([2, -4, np.NaN, 6, 3])
+ >>> t = pd.Series([2, -4, np.nan, 6, 3])
>>> t
0 2.0
1 -4.0
@@ -9828,7 +9828,7 @@ def align(
copy : bool, default True
Always returns new objects. If copy=False and no reindexing is
required then original objects are returned.
- fill_value : scalar, default np.NaN
+ fill_value : scalar, default np.nan
Value to use for missing values. Defaults to NaN, but can be any
"compatible" value.
method : {{'backfill', 'bfill', 'pad', 'ffill', None}}, default None
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index e327dd9d6c5ff..5a7f42a535951 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -5418,7 +5418,7 @@ def _mask_selected_obj(self, mask: npt.NDArray[np.bool_]) -> NDFrameT:
def _reindex_output(
self,
output: OutputFrameOrSeries,
- fill_value: Scalar = np.NaN,
+ fill_value: Scalar = np.nan,
qs: npt.NDArray[np.float64] | None = None,
) -> OutputFrameOrSeries:
"""
@@ -5436,7 +5436,7 @@ def _reindex_output(
----------
output : Series or DataFrame
Object resulting from grouping and applying an operation.
- fill_value : scalar, default np.NaN
+ fill_value : scalar, default np.nan
Value to use for unobserved categories if self.observed is False.
qs : np.ndarray[float64] or None, default None
quantile values, only relevant for quantile.
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index dbb7cb97d1d6f..5854342cdcf13 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -2854,7 +2854,7 @@ def isna(self) -> npt.NDArray[np.bool_]:
Show which entries in a pandas.Index are NA. The result is an
array.
- >>> idx = pd.Index([5.2, 6.0, np.NaN])
+ >>> idx = pd.Index([5.2, 6.0, np.nan])
>>> idx
Index([5.2, 6.0, nan], dtype='float64')
>>> idx.isna()
@@ -2910,7 +2910,7 @@ def notna(self) -> npt.NDArray[np.bool_]:
Show which entries in an Index are not NA. The result is an
array.
- >>> idx = pd.Index([5.2, 6.0, np.NaN])
+ >>> idx = pd.Index([5.2, 6.0, np.nan])
>>> idx
Index([5.2, 6.0, nan], dtype='float64')
>>> idx.notna()
diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py
index 302d8fdb353fd..b36672df32e61 100644
--- a/pandas/core/indexes/interval.py
+++ b/pandas/core/indexes/interval.py
@@ -125,7 +125,7 @@ def _get_next_label(label):
elif is_integer_dtype(dtype):
return label + 1
elif is_float_dtype(dtype):
- return np.nextafter(label, np.infty)
+ return np.nextafter(label, np.inf)
else:
raise TypeError(f"cannot determine next label for type {repr(type(label))}")
@@ -142,7 +142,7 @@ def _get_prev_label(label):
elif is_integer_dtype(dtype):
return label - 1
elif is_float_dtype(dtype):
- return np.nextafter(label, -np.infty)
+ return np.nextafter(label, -np.inf)
else:
raise TypeError(f"cannot determine next label for type {repr(type(label))}")
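The `interval.py` hunks above swap the removed alias `np.infty` for `np.inf`. A small illustrative sketch (assumptions: plain NumPy, no pandas) of the `nextafter` idiom those helpers rely on, stepping a float label to the adjacent representable value in a given direction:

```python
import numpy as np

# np.inf replaces the removed alias np.infty. nextafter(x, toward) returns
# the adjacent representable float in the direction of `toward`, which is
# how the interval code derives the "next"/"previous" float label.
up = np.nextafter(1.0, np.inf)
down = np.nextafter(1.0, -np.inf)
assert down < 1.0 < up
```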
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 020fadc359ebd..d7258cd1cf4b2 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -5586,7 +5586,7 @@ def dropna(
Empty strings are not considered NA values. ``None`` is considered an
NA value.
- >>> ser = pd.Series([np.NaN, 2, pd.NaT, '', None, 'I stay'])
+ >>> ser = pd.Series([np.nan, 2, pd.NaT, '', None, 'I stay'])
>>> ser
0 NaN
1 2
diff --git a/pandas/core/strings/accessor.py b/pandas/core/strings/accessor.py
index e59369db776da..becf9b47b3af1 100644
--- a/pandas/core/strings/accessor.py
+++ b/pandas/core/strings/accessor.py
@@ -1215,7 +1215,7 @@ def contains(
--------
Returning a Series of booleans using only a literal pattern.
- >>> s1 = pd.Series(['Mouse', 'dog', 'house and parrot', '23', np.NaN])
+ >>> s1 = pd.Series(['Mouse', 'dog', 'house and parrot', '23', np.nan])
>>> s1.str.contains('og', regex=False)
0 False
1 True
@@ -1226,7 +1226,7 @@ def contains(
Returning an Index of booleans using only a literal pattern.
- >>> ind = pd.Index(['Mouse', 'dog', 'house and parrot', '23.0', np.NaN])
+ >>> ind = pd.Index(['Mouse', 'dog', 'house and parrot', '23.0', np.nan])
>>> ind.str.contains('23', regex=False)
Index([False, False, False, True, nan], dtype='object')
@@ -3500,7 +3500,7 @@ def str_extractall(arr, pat, flags: int = 0) -> DataFrame:
for match_i, match_tuple in enumerate(regex.findall(subject)):
if isinstance(match_tuple, str):
match_tuple = (match_tuple,)
- na_tuple = [np.NaN if group == "" else group for group in match_tuple]
+ na_tuple = [np.nan if group == "" else group for group in match_tuple]
match_list.append(na_tuple)
result_key = tuple(subject_key + (match_i,))
index_list.append(result_key)
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index 9fe8cbfa159c6..ff26abd5cc26c 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -1715,7 +1715,7 @@ def format_percentiles(
"""
percentiles = np.asarray(percentiles)
- # It checks for np.NaN as well
+ # It checks for np.nan as well
if (
not is_numeric_dtype(percentiles)
or not np.all(percentiles >= 0)
diff --git a/pandas/tests/apply/test_frame_apply.py b/pandas/tests/apply/test_frame_apply.py
index ba052c6936dd9..3a3f73a68374b 100644
--- a/pandas/tests/apply/test_frame_apply.py
+++ b/pandas/tests/apply/test_frame_apply.py
@@ -637,15 +637,15 @@ def test_apply_with_byte_string():
tm.assert_frame_equal(result, expected)
-@pytest.mark.parametrize("val", ["asd", 12, None, np.NaN])
+@pytest.mark.parametrize("val", ["asd", 12, None, np.nan])
def test_apply_category_equalness(val):
# Check if categorical comparisons on apply, GH 21239
- df_values = ["asd", None, 12, "asd", "cde", np.NaN]
+ df_values = ["asd", None, 12, "asd", "cde", np.nan]
df = DataFrame({"a": df_values}, dtype="category")
result = df.a.apply(lambda x: x == val)
expected = Series(
- [np.NaN if pd.isnull(x) else x == val for x in df_values], name="a"
+ [np.nan if pd.isnull(x) else x == val for x in df_values], name="a"
)
tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/apply/test_series_apply.py b/pandas/tests/apply/test_series_apply.py
index aea1e03dfe0ee..d3e5ac1b4ca7a 100644
--- a/pandas/tests/apply/test_series_apply.py
+++ b/pandas/tests/apply/test_series_apply.py
@@ -242,7 +242,7 @@ def test_apply_categorical(by_row):
assert result.dtype == object
-@pytest.mark.parametrize("series", [["1-1", "1-1", np.NaN], ["1-1", "1-2", np.NaN]])
+@pytest.mark.parametrize("series", [["1-1", "1-1", np.nan], ["1-1", "1-2", np.nan]])
def test_apply_categorical_with_nan_values(series, by_row):
# GH 20714 bug fixed in: GH 24275
s = Series(series, dtype="category")
@@ -254,7 +254,7 @@ def test_apply_categorical_with_nan_values(series, by_row):
result = s.apply(lambda x: x.split("-")[0], by_row=by_row)
result = result.astype(object)
- expected = Series(["1", "1", np.NaN], dtype="category")
+ expected = Series(["1", "1", np.nan], dtype="category")
expected = expected.astype(object)
tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/arrays/categorical/test_analytics.py b/pandas/tests/arrays/categorical/test_analytics.py
index c42364d4d4377..c2c53fbc4637e 100644
--- a/pandas/tests/arrays/categorical/test_analytics.py
+++ b/pandas/tests/arrays/categorical/test_analytics.py
@@ -73,8 +73,8 @@ def test_min_max_reduce(self):
@pytest.mark.parametrize(
"categories,expected",
[
- (list("ABC"), np.NaN),
- ([1, 2, 3], np.NaN),
+ (list("ABC"), np.nan),
+ ([1, 2, 3], np.nan),
pytest.param(
Series(date_range("2020-01-01", periods=3), dtype="category"),
NaT,
diff --git a/pandas/tests/arrays/interval/test_interval.py b/pandas/tests/arrays/interval/test_interval.py
index e16ef37e8799d..761b85287764f 100644
--- a/pandas/tests/arrays/interval/test_interval.py
+++ b/pandas/tests/arrays/interval/test_interval.py
@@ -129,7 +129,7 @@ def test_set_na(self, left_right_dtypes):
# GH#45484 TypeError, not ValueError, matches what we get with
# non-NA un-holdable value.
with pytest.raises(TypeError, match=msg):
- result[0] = np.NaN
+ result[0] = np.nan
return
result[0] = np.nan
diff --git a/pandas/tests/computation/test_eval.py b/pandas/tests/computation/test_eval.py
index f958d25e51103..9c630e29ea8e6 100644
--- a/pandas/tests/computation/test_eval.py
+++ b/pandas/tests/computation/test_eval.py
@@ -725,7 +725,7 @@ def test_and_logic_string_match(self):
class TestTypeCasting:
@pytest.mark.parametrize("op", ["+", "-", "*", "**", "/"])
# maybe someday... numexpr has too many upcasting rules now
- # chain(*(np.sctypes[x] for x in ['uint', 'int', 'float']))
+ # chain(*(np.core.sctypes[x] for x in ['uint', 'int', 'float']))
@pytest.mark.parametrize("dt", [np.float32, np.float64])
@pytest.mark.parametrize("left_right", [("df", "3"), ("3", "df")])
def test_binop_typecasting(self, engine, parser, op, dt, left_right):
diff --git a/pandas/tests/dtypes/cast/test_infer_dtype.py b/pandas/tests/dtypes/cast/test_infer_dtype.py
index b5d761b3549fa..ed08df74461ef 100644
--- a/pandas/tests/dtypes/cast/test_infer_dtype.py
+++ b/pandas/tests/dtypes/cast/test_infer_dtype.py
@@ -42,7 +42,7 @@ def test_infer_dtype_from_float_scalar(float_numpy_dtype):
@pytest.mark.parametrize(
- "data,exp_dtype", [(12, np.int64), (np.float_(12), np.float64)]
+ "data,exp_dtype", [(12, np.int64), (np.float64(12), np.float64)]
)
def test_infer_dtype_from_python_scalar(data, exp_dtype):
dtype, val = infer_dtype_from_scalar(data)
@@ -58,7 +58,7 @@ def test_infer_dtype_from_boolean(bool_val):
def test_infer_dtype_from_complex(complex_dtype):
data = np.dtype(complex_dtype).type(1)
dtype, val = infer_dtype_from_scalar(data)
- assert dtype == np.complex_
+ assert dtype == np.complex128
def test_infer_dtype_from_datetime():
@@ -153,7 +153,7 @@ def test_infer_dtype_from_scalar_errors():
("foo", np.object_),
(b"foo", np.object_),
(1, np.int64),
- (1.5, np.float_),
+ (1.5, np.float64),
(np.datetime64("2016-01-01"), np.dtype("M8[s]")),
(Timestamp("20160101"), np.dtype("M8[s]")),
(Timestamp("20160101", tz="UTC"), "datetime64[s, UTC]"),
@@ -173,7 +173,7 @@ def test_infer_dtype_from_scalar(value, expected):
([1], np.int_),
(np.array([1], dtype=np.int64), np.int64),
([np.nan, 1, ""], np.object_),
- (np.array([[1.0, 2.0]]), np.float_),
+ (np.array([[1.0, 2.0]]), np.float64),
(Categorical(list("aabc")), "category"),
(Categorical([1, 2, 3]), "category"),
(date_range("20160101", periods=3), np.dtype("=M8[ns]")),
diff --git a/pandas/tests/dtypes/test_common.py b/pandas/tests/dtypes/test_common.py
index 0043ace1b9590..471e456146178 100644
--- a/pandas/tests/dtypes/test_common.py
+++ b/pandas/tests/dtypes/test_common.py
@@ -652,7 +652,7 @@ def test_is_complex_dtype():
assert not com.is_complex_dtype(pd.Series([1, 2]))
assert not com.is_complex_dtype(np.array(["a", "b"]))
- assert com.is_complex_dtype(np.complex_)
+ assert com.is_complex_dtype(np.complex128)
assert com.is_complex_dtype(complex)
assert com.is_complex_dtype(np.array([1 + 1j, 5]))
diff --git a/pandas/tests/dtypes/test_inference.py b/pandas/tests/dtypes/test_inference.py
index 375003e58c21a..df7c787d2b9bf 100644
--- a/pandas/tests/dtypes/test_inference.py
+++ b/pandas/tests/dtypes/test_inference.py
@@ -536,7 +536,7 @@ def test_isneginf_scalar(self, value, expected):
)
def test_maybe_convert_nullable_boolean(self, convert_to_masked_nullable, exp):
# GH 40687
- arr = np.array([True, np.NaN], dtype=object)
+ arr = np.array([True, np.nan], dtype=object)
result = libops.maybe_convert_bool(
arr, set(), convert_to_masked_nullable=convert_to_masked_nullable
)
@@ -862,7 +862,7 @@ def test_maybe_convert_objects_timedelta64_nat(self):
)
def test_maybe_convert_objects_nullable_integer(self, exp):
# GH27335
- arr = np.array([2, np.NaN], dtype=object)
+ arr = np.array([2, np.nan], dtype=object)
result = lib.maybe_convert_objects(arr, convert_to_nullable_dtype=True)
tm.assert_extension_array_equal(result, exp)
@@ -890,7 +890,7 @@ def test_maybe_convert_numeric_nullable_integer(
self, convert_to_masked_nullable, exp
):
# GH 40687
- arr = np.array([2, np.NaN], dtype=object)
+ arr = np.array([2, np.nan], dtype=object)
result = lib.maybe_convert_numeric(
arr, set(), convert_to_masked_nullable=convert_to_masked_nullable
)
@@ -1889,7 +1889,6 @@ def test_is_scalar_numpy_array_scalars(self):
assert is_scalar(np.complex64(2))
assert is_scalar(np.object_("foobar"))
assert is_scalar(np.str_("foobar"))
- assert is_scalar(np.unicode_("foobar"))
assert is_scalar(np.bytes_(b"foobar"))
assert is_scalar(np.datetime64("2014-01-01"))
assert is_scalar(np.timedelta64(1, "h"))
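The dropped `np.unicode_` assertion above reflects another NumPy 2.0 removal: `np.unicode_` and `np.string_` were aliases for `np.str_` and `np.bytes_`. A hedged sketch of the surviving canonical names (only the behavior of the kept aliases is shown; the removed ones cannot be exercised on NumPy >= 2.0):

```python
import numpy as np

# np.str_ and np.bytes_ are the surviving scalar string types; np.unicode_
# and np.string_ were aliases for them and were removed in NumPy 2.0.
assert issubclass(np.str_, str)
assert issubclass(np.bytes_, bytes)
assert np.str_("foobar") == "foobar"
```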
diff --git a/pandas/tests/dtypes/test_missing.py b/pandas/tests/dtypes/test_missing.py
index 170f4f49ba377..451ac2afd1d91 100644
--- a/pandas/tests/dtypes/test_missing.py
+++ b/pandas/tests/dtypes/test_missing.py
@@ -51,7 +51,7 @@
def test_notna_notnull(notna_f):
assert notna_f(1.0)
assert not notna_f(None)
- assert not notna_f(np.NaN)
+ assert not notna_f(np.nan)
msg = "use_inf_as_na option is deprecated"
with tm.assert_produces_warning(FutureWarning, match=msg):
@@ -112,7 +112,7 @@ def test_empty_object(self, shape):
def test_isna_isnull(self, isna_f):
assert not isna_f(1.0)
assert isna_f(None)
- assert isna_f(np.NaN)
+ assert isna_f(np.nan)
assert float("nan")
assert not isna_f(np.inf)
assert not isna_f(-np.inf)
@@ -156,7 +156,7 @@ def test_isna_lists(self):
tm.assert_numpy_array_equal(result, exp)
# GH20675
- result = isna([np.NaN, "world"])
+ result = isna([np.nan, "world"])
exp = np.array([True, False])
tm.assert_numpy_array_equal(result, exp)
diff --git a/pandas/tests/frame/constructors/test_from_records.py b/pandas/tests/frame/constructors/test_from_records.py
index 95f9f2ba4051e..59dca5055f170 100644
--- a/pandas/tests/frame/constructors/test_from_records.py
+++ b/pandas/tests/frame/constructors/test_from_records.py
@@ -269,7 +269,7 @@ def test_from_records_series_categorical_index(self):
series_of_dicts = Series([{"a": 1}, {"a": 2}, {"b": 3}], index=index)
frame = DataFrame.from_records(series_of_dicts, index=index)
expected = DataFrame(
- {"a": [1, 2, np.NaN], "b": [np.NaN, np.NaN, 3]}, index=index
+ {"a": [1, 2, np.nan], "b": [np.nan, np.nan, 3]}, index=index
)
tm.assert_frame_equal(frame, expected)
diff --git a/pandas/tests/frame/methods/test_astype.py b/pandas/tests/frame/methods/test_astype.py
index 34cbebe1b3d3f..6590f10c6b967 100644
--- a/pandas/tests/frame/methods/test_astype.py
+++ b/pandas/tests/frame/methods/test_astype.py
@@ -164,7 +164,7 @@ def test_astype_str(self):
def test_astype_str_float(self):
# see GH#11302
- result = DataFrame([np.NaN]).astype(str)
+ result = DataFrame([np.nan]).astype(str)
expected = DataFrame(["nan"])
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/frame/methods/test_clip.py b/pandas/tests/frame/methods/test_clip.py
index 710978057460a..9bd032a0aefc4 100644
--- a/pandas/tests/frame/methods/test_clip.py
+++ b/pandas/tests/frame/methods/test_clip.py
@@ -166,7 +166,7 @@ def test_clip_with_na_args(self, float_frame):
# GH#40420
data = {"col_0": [9, -3, 0, -1, 5], "col_1": [-2, -7, 6, 8, -5]}
df = DataFrame(data)
- t = Series([2, -4, np.NaN, 6, 3])
+ t = Series([2, -4, np.nan, 6, 3])
result = df.clip(lower=t, axis=0)
expected = DataFrame({"col_0": [9, -3, 0, 6, 5], "col_1": [2, -4, 6, 8, 3]})
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/frame/methods/test_describe.py b/pandas/tests/frame/methods/test_describe.py
index e2f92a1e04cb5..f56a7896c753e 100644
--- a/pandas/tests/frame/methods/test_describe.py
+++ b/pandas/tests/frame/methods/test_describe.py
@@ -332,7 +332,7 @@ def test_describe_percentiles_integer_idx(self):
result = df.describe(percentiles=pct)
expected = DataFrame(
- {"x": [1.0, 1.0, np.NaN, 1.0, *(1.0 for _ in pct), 1.0]},
+ {"x": [1.0, 1.0, np.nan, 1.0, *(1.0 for _ in pct), 1.0]},
index=[
"count",
"mean",
diff --git a/pandas/tests/frame/methods/test_dropna.py b/pandas/tests/frame/methods/test_dropna.py
index 11edf665b5494..7899b4aeac3fd 100644
--- a/pandas/tests/frame/methods/test_dropna.py
+++ b/pandas/tests/frame/methods/test_dropna.py
@@ -231,7 +231,7 @@ def test_dropna_with_duplicate_columns(self):
def test_set_single_column_subset(self):
# GH 41021
- df = DataFrame({"A": [1, 2, 3], "B": list("abc"), "C": [4, np.NaN, 5]})
+ df = DataFrame({"A": [1, 2, 3], "B": list("abc"), "C": [4, np.nan, 5]})
expected = DataFrame(
{"A": [1, 3], "B": list("ac"), "C": [4.0, 5.0]}, index=[0, 2]
)
@@ -248,7 +248,7 @@ def test_single_column_not_present_in_axis(self):
def test_subset_is_nparray(self):
# GH 41021
- df = DataFrame({"A": [1, 2, np.NaN], "B": list("abc"), "C": [4, np.NaN, 5]})
+ df = DataFrame({"A": [1, 2, np.nan], "B": list("abc"), "C": [4, np.nan, 5]})
expected = DataFrame({"A": [1.0], "B": ["a"], "C": [4.0]})
result = df.dropna(subset=np.array(["A", "C"]))
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/frame/methods/test_dtypes.py b/pandas/tests/frame/methods/test_dtypes.py
index 6f21bd4c4b438..4bdf16977dae6 100644
--- a/pandas/tests/frame/methods/test_dtypes.py
+++ b/pandas/tests/frame/methods/test_dtypes.py
@@ -62,15 +62,15 @@ def test_datetime_with_tz_dtypes(self):
def test_dtypes_are_correct_after_column_slice(self):
# GH6525
- df = DataFrame(index=range(5), columns=list("abc"), dtype=np.float_)
+ df = DataFrame(index=range(5), columns=list("abc"), dtype=np.float64)
tm.assert_series_equal(
df.dtypes,
- Series({"a": np.float_, "b": np.float_, "c": np.float_}),
+ Series({"a": np.float64, "b": np.float64, "c": np.float64}),
)
- tm.assert_series_equal(df.iloc[:, 2:].dtypes, Series({"c": np.float_}))
+ tm.assert_series_equal(df.iloc[:, 2:].dtypes, Series({"c": np.float64}))
tm.assert_series_equal(
df.dtypes,
- Series({"a": np.float_, "b": np.float_, "c": np.float_}),
+ Series({"a": np.float64, "b": np.float64, "c": np.float64}),
)
@pytest.mark.parametrize(
diff --git a/pandas/tests/frame/methods/test_replace.py b/pandas/tests/frame/methods/test_replace.py
index 3203482ddf724..61e44b4e24c08 100644
--- a/pandas/tests/frame/methods/test_replace.py
+++ b/pandas/tests/frame/methods/test_replace.py
@@ -666,7 +666,7 @@ def test_replace_NA_with_None(self):
def test_replace_NAT_with_None(self):
# gh-45836
df = DataFrame([pd.NaT, pd.NaT])
- result = df.replace({pd.NaT: None, np.NaN: None})
+ result = df.replace({pd.NaT: None, np.nan: None})
expected = DataFrame([None, None])
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/frame/methods/test_select_dtypes.py b/pandas/tests/frame/methods/test_select_dtypes.py
index 3bfb1af423bdd..67dd5b6217187 100644
--- a/pandas/tests/frame/methods/test_select_dtypes.py
+++ b/pandas/tests/frame/methods/test_select_dtypes.py
@@ -340,7 +340,7 @@ def test_select_dtypes_datetime_with_tz(self):
tm.assert_frame_equal(result, expected)
@pytest.mark.parametrize(
- "dtype", [str, "str", np.string_, "S1", "unicode", np.unicode_, "U1"]
+ "dtype", [str, "str", np.bytes_, "S1", "unicode", np.str_, "U1"]
)
@pytest.mark.parametrize("arg", ["include", "exclude"])
def test_select_dtypes_str_raises(self, dtype, arg):
diff --git a/pandas/tests/frame/methods/test_shift.py b/pandas/tests/frame/methods/test_shift.py
index 35941e9f24a4e..808f0cff2485c 100644
--- a/pandas/tests/frame/methods/test_shift.py
+++ b/pandas/tests/frame/methods/test_shift.py
@@ -681,10 +681,10 @@ def test_shift_with_iterable_basic_functionality(self):
{
"a_0": [1, 2, 3],
"b_0": [4, 5, 6],
- "a_1": [np.NaN, 1.0, 2.0],
- "b_1": [np.NaN, 4.0, 5.0],
- "a_2": [np.NaN, np.NaN, 1.0],
- "b_2": [np.NaN, np.NaN, 4.0],
+ "a_1": [np.nan, 1.0, 2.0],
+ "b_1": [np.nan, 4.0, 5.0],
+ "a_2": [np.nan, np.nan, 1.0],
+ "b_2": [np.nan, np.nan, 4.0],
}
)
tm.assert_frame_equal(expected, shifted)
diff --git a/pandas/tests/frame/test_block_internals.py b/pandas/tests/frame/test_block_internals.py
index 008d7a023576a..9e8d92e832d01 100644
--- a/pandas/tests/frame/test_block_internals.py
+++ b/pandas/tests/frame/test_block_internals.py
@@ -131,22 +131,22 @@ def test_constructor_with_convert(self):
df = DataFrame({"A": [None, 1]})
result = df["A"]
- expected = Series(np.asarray([np.nan, 1], np.float_), name="A")
+ expected = Series(np.asarray([np.nan, 1], np.float64), name="A")
tm.assert_series_equal(result, expected)
df = DataFrame({"A": [1.0, 2]})
result = df["A"]
- expected = Series(np.asarray([1.0, 2], np.float_), name="A")
+ expected = Series(np.asarray([1.0, 2], np.float64), name="A")
tm.assert_series_equal(result, expected)
df = DataFrame({"A": [1.0 + 2.0j, 3]})
result = df["A"]
- expected = Series(np.asarray([1.0 + 2.0j, 3], np.complex_), name="A")
+ expected = Series(np.asarray([1.0 + 2.0j, 3], np.complex128), name="A")
tm.assert_series_equal(result, expected)
df = DataFrame({"A": [1.0 + 2.0j, 3.0]})
result = df["A"]
- expected = Series(np.asarray([1.0 + 2.0j, 3.0], np.complex_), name="A")
+ expected = Series(np.asarray([1.0 + 2.0j, 3.0], np.complex128), name="A")
tm.assert_series_equal(result, expected)
df = DataFrame({"A": [1.0 + 2.0j, True]})
@@ -156,12 +156,12 @@ def test_constructor_with_convert(self):
df = DataFrame({"A": [1.0, None]})
result = df["A"]
- expected = Series(np.asarray([1.0, np.nan], np.float_), name="A")
+ expected = Series(np.asarray([1.0, np.nan], np.float64), name="A")
tm.assert_series_equal(result, expected)
df = DataFrame({"A": [1.0 + 2.0j, None]})
result = df["A"]
- expected = Series(np.asarray([1.0 + 2.0j, np.nan], np.complex_), name="A")
+ expected = Series(np.asarray([1.0 + 2.0j, np.nan], np.complex128), name="A")
tm.assert_series_equal(result, expected)
df = DataFrame({"A": [2.0, 1, True, None]})
@@ -343,9 +343,9 @@ def test_stale_cached_series_bug_473(self, using_copy_on_write):
Y["e"] = Y["e"].astype("object")
if using_copy_on_write:
with tm.raises_chained_assignment_error():
- Y["g"]["c"] = np.NaN
+ Y["g"]["c"] = np.nan
else:
- Y["g"]["c"] = np.NaN
+ Y["g"]["c"] = np.nan
repr(Y)
Y.sum()
Y["g"].sum()
diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index a493084142f7b..c170704150383 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -1781,8 +1781,6 @@ def test_constructor_empty_with_string_dtype(self):
tm.assert_frame_equal(df, expected)
df = DataFrame(index=[0, 1], columns=[0, 1], dtype=np.str_)
tm.assert_frame_equal(df, expected)
- df = DataFrame(index=[0, 1], columns=[0, 1], dtype=np.unicode_)
- tm.assert_frame_equal(df, expected)
df = DataFrame(index=[0, 1], columns=[0, 1], dtype="U5")
tm.assert_frame_equal(df, expected)
@@ -1826,7 +1824,7 @@ def test_constructor_single_value(self):
def test_constructor_with_datetimes(self):
intname = np.dtype(np.int_).name
- floatname = np.dtype(np.float_).name
+ floatname = np.dtype(np.float64).name
objectname = np.dtype(np.object_).name
# single item
diff --git a/pandas/tests/frame/test_reductions.py b/pandas/tests/frame/test_reductions.py
index ab36934533beb..e7b6a0c0b39b0 100644
--- a/pandas/tests/frame/test_reductions.py
+++ b/pandas/tests/frame/test_reductions.py
@@ -134,7 +134,7 @@ def wrapper(x):
# all NA case
if has_skipna:
- all_na = frame * np.NaN
+ all_na = frame * np.nan
r0 = getattr(all_na, opname)(axis=0)
r1 = getattr(all_na, opname)(axis=1)
if opname in ["sum", "prod"]:
@@ -834,9 +834,9 @@ def test_sum_nanops_min_count(self):
@pytest.mark.parametrize(
"kwargs, expected_result",
[
- ({"axis": 1, "min_count": 2}, [3.2, 5.3, np.NaN]),
- ({"axis": 1, "min_count": 3}, [np.NaN, np.NaN, np.NaN]),
- ({"axis": 1, "skipna": False}, [3.2, 5.3, np.NaN]),
+ ({"axis": 1, "min_count": 2}, [3.2, 5.3, np.nan]),
+ ({"axis": 1, "min_count": 3}, [np.nan, np.nan, np.nan]),
+ ({"axis": 1, "skipna": False}, [3.2, 5.3, np.nan]),
],
)
def test_sum_nanops_dtype_min_count(self, float_type, kwargs, expected_result):
@@ -850,9 +850,9 @@ def test_sum_nanops_dtype_min_count(self, float_type, kwargs, expected_result):
@pytest.mark.parametrize(
"kwargs, expected_result",
[
- ({"axis": 1, "min_count": 2}, [2.0, 4.0, np.NaN]),
- ({"axis": 1, "min_count": 3}, [np.NaN, np.NaN, np.NaN]),
- ({"axis": 1, "skipna": False}, [2.0, 4.0, np.NaN]),
+ ({"axis": 1, "min_count": 2}, [2.0, 4.0, np.nan]),
+ ({"axis": 1, "min_count": 3}, [np.nan, np.nan, np.nan]),
+ ({"axis": 1, "skipna": False}, [2.0, 4.0, np.nan]),
],
)
def test_prod_nanops_dtype_min_count(self, float_type, kwargs, expected_result):
@@ -1189,7 +1189,7 @@ def wrapper(x):
f(axis=2)
# all NA case
- all_na = frame * np.NaN
+ all_na = frame * np.nan
r0 = getattr(all_na, opname)(axis=0)
r1 = getattr(all_na, opname)(axis=1)
if opname == "any":
diff --git a/pandas/tests/frame/test_stack_unstack.py b/pandas/tests/frame/test_stack_unstack.py
index cb8e8c5025e3b..c90b871d5d66f 100644
--- a/pandas/tests/frame/test_stack_unstack.py
+++ b/pandas/tests/frame/test_stack_unstack.py
@@ -72,7 +72,7 @@ def test_stack_mixed_level(self, future_stack):
def test_unstack_not_consolidated(self, using_array_manager):
# Gh#34708
- df = DataFrame({"x": [1, 2, np.NaN], "y": [3.0, 4, np.NaN]})
+ df = DataFrame({"x": [1, 2, np.nan], "y": [3.0, 4, np.nan]})
df2 = df[["x"]]
df2["y"] = df["y"]
if not using_array_manager:
@@ -584,7 +584,7 @@ def test_unstack_to_series(self, float_frame):
tm.assert_frame_equal(undo, float_frame)
# check NA handling
- data = DataFrame({"x": [1, 2, np.NaN], "y": [3.0, 4, np.NaN]})
+ data = DataFrame({"x": [1, 2, np.nan], "y": [3.0, 4, np.nan]})
data.index = Index(["a", "b", "c"])
result = data.unstack()
@@ -592,7 +592,7 @@ def test_unstack_to_series(self, float_frame):
levels=[["x", "y"], ["a", "b", "c"]],
codes=[[0, 0, 0, 1, 1, 1], [0, 1, 2, 0, 1, 2]],
)
- expected = Series([1, 2, np.NaN, 3, 4, np.NaN], index=midx)
+ expected = Series([1, 2, np.nan, 3, 4, np.nan], index=midx)
tm.assert_series_equal(result, expected)
@@ -902,9 +902,9 @@ def cast(val):
def test_unstack_nan_index2(self):
# GH7403
df = DataFrame({"A": list("aaaabbbb"), "B": range(8), "C": range(8)})
- # Explicit cast to avoid implicit cast when setting to np.NaN
+ # Explicit cast to avoid implicit cast when setting to np.nan
df = df.astype({"B": "float"})
- df.iloc[3, 1] = np.NaN
+ df.iloc[3, 1] = np.nan
left = df.set_index(["A", "B"]).unstack(0)
vals = [
@@ -921,9 +921,9 @@ def test_unstack_nan_index2(self):
tm.assert_frame_equal(left, right)
df = DataFrame({"A": list("aaaabbbb"), "B": list(range(4)) * 2, "C": range(8)})
- # Explicit cast to avoid implicit cast when setting to np.NaN
+ # Explicit cast to avoid implicit cast when setting to np.nan
df = df.astype({"B": "float"})
- df.iloc[2, 1] = np.NaN
+ df.iloc[2, 1] = np.nan
left = df.set_index(["A", "B"]).unstack(0)
vals = [[2, np.nan], [0, 4], [1, 5], [np.nan, 6], [3, 7]]
@@ -935,9 +935,9 @@ def test_unstack_nan_index2(self):
tm.assert_frame_equal(left, right)
df = DataFrame({"A": list("aaaabbbb"), "B": list(range(4)) * 2, "C": range(8)})
- # Explicit cast to avoid implicit cast when setting to np.NaN
+ # Explicit cast to avoid implicit cast when setting to np.nan
df = df.astype({"B": "float"})
- df.iloc[3, 1] = np.NaN
+ df.iloc[3, 1] = np.nan
left = df.set_index(["A", "B"]).unstack(0)
vals = [[3, np.nan], [0, 4], [1, 5], [2, 6], [np.nan, 7]]
@@ -958,7 +958,7 @@ def test_unstack_nan_index3(self, using_array_manager):
}
)
- df.iloc[3, 1] = np.NaN
+ df.iloc[3, 1] = np.nan
left = df.set_index(["A", "B"]).unstack()
vals = np.array([[3, 0, 1, 2, np.nan, 4], [np.nan, 5, 6, 7, 8, 9]])
@@ -1754,7 +1754,7 @@ def test_stack_mixed_dtype(self, multiindex_dataframe_random_data, future_stack)
result = df["foo"].stack(future_stack=future_stack).sort_index()
tm.assert_series_equal(stacked["foo"], result, check_names=False)
assert result.name is None
- assert stacked["bar"].dtype == np.float_
+ assert stacked["bar"].dtype == np.float64
def test_unstack_bug(self, future_stack):
df = DataFrame(
diff --git a/pandas/tests/groupby/test_categorical.py b/pandas/tests/groupby/test_categorical.py
index d0ae9eeed394f..68ce58ad23690 100644
--- a/pandas/tests/groupby/test_categorical.py
+++ b/pandas/tests/groupby/test_categorical.py
@@ -18,7 +18,7 @@
from pandas.tests.groupby import get_groupby_method_args
-def cartesian_product_for_groupers(result, args, names, fill_value=np.NaN):
+def cartesian_product_for_groupers(result, args, names, fill_value=np.nan):
"""Reindex to a cartesian production for the groupers,
preserving the nature (Categorical) of each grouper
"""
@@ -42,28 +42,28 @@ def f(a):
# These expected values can be used across several tests (i.e. they are
# the same for SeriesGroupBy and DataFrameGroupBy) but they should only be
# hardcoded in one place.
- "all": np.NaN,
- "any": np.NaN,
+ "all": np.nan,
+ "any": np.nan,
"count": 0,
- "corrwith": np.NaN,
- "first": np.NaN,
- "idxmax": np.NaN,
- "idxmin": np.NaN,
- "last": np.NaN,
- "max": np.NaN,
- "mean": np.NaN,
- "median": np.NaN,
- "min": np.NaN,
- "nth": np.NaN,
+ "corrwith": np.nan,
+ "first": np.nan,
+ "idxmax": np.nan,
+ "idxmin": np.nan,
+ "last": np.nan,
+ "max": np.nan,
+ "mean": np.nan,
+ "median": np.nan,
+ "min": np.nan,
+ "nth": np.nan,
"nunique": 0,
- "prod": np.NaN,
- "quantile": np.NaN,
- "sem": np.NaN,
+ "prod": np.nan,
+ "quantile": np.nan,
+ "sem": np.nan,
"size": 0,
- "skew": np.NaN,
- "std": np.NaN,
+ "skew": np.nan,
+ "std": np.nan,
"sum": 0,
- "var": np.NaN,
+ "var": np.nan,
}
@@ -1750,8 +1750,8 @@ def test_series_groupby_first_on_categorical_col_grouped_on_2_categoricals(
cat2 = Categorical([0, 1])
idx = MultiIndex.from_product([cat2, cat2], names=["a", "b"])
expected_dict = {
- "first": Series([0, np.NaN, np.NaN, 1], idx, name="c"),
- "last": Series([1, np.NaN, np.NaN, 0], idx, name="c"),
+ "first": Series([0, np.nan, np.nan, 1], idx, name="c"),
+ "last": Series([1, np.nan, np.nan, 0], idx, name="c"),
}
expected = expected_dict[func]
@@ -1775,8 +1775,8 @@ def test_df_groupby_first_on_categorical_col_grouped_on_2_categoricals(
cat2 = Categorical([0, 1])
idx = MultiIndex.from_product([cat2, cat2], names=["a", "b"])
expected_dict = {
- "first": Series([0, np.NaN, np.NaN, 1], idx, name="c"),
- "last": Series([1, np.NaN, np.NaN, 0], idx, name="c"),
+ "first": Series([0, np.nan, np.nan, 1], idx, name="c"),
+ "last": Series([1, np.nan, np.nan, 0], idx, name="c"),
}
expected = expected_dict[func].to_frame()
diff --git a/pandas/tests/groupby/test_counting.py b/pandas/tests/groupby/test_counting.py
index fd5018d05380c..6c27344ce3110 100644
--- a/pandas/tests/groupby/test_counting.py
+++ b/pandas/tests/groupby/test_counting.py
@@ -232,7 +232,7 @@ def test_count_with_only_nans_in_first_group(self):
def test_count_groupby_column_with_nan_in_groupby_column(self):
# https://github.com/pandas-dev/pandas/issues/32841
- df = DataFrame({"A": [1, 1, 1, 1, 1], "B": [5, 4, np.NaN, 3, 0]})
+ df = DataFrame({"A": [1, 1, 1, 1, 1], "B": [5, 4, np.nan, 3, 0]})
res = df.groupby(["B"]).count()
expected = DataFrame(
index=Index([0.0, 3.0, 4.0, 5.0], name="B"), data={"A": [1, 1, 1, 1]}
diff --git a/pandas/tests/groupby/test_rank.py b/pandas/tests/groupby/test_rank.py
index 26881bdd18274..5d85a0783e024 100644
--- a/pandas/tests/groupby/test_rank.py
+++ b/pandas/tests/groupby/test_rank.py
@@ -578,7 +578,7 @@ def test_rank_min_int():
result = df.groupby("grp").rank()
expected = DataFrame(
- {"int_col": [1.0, 2.0, 1.0], "datetimelike": [np.NaN, 1.0, np.NaN]}
+ {"int_col": [1.0, 2.0, 1.0], "datetimelike": [np.nan, 1.0, np.nan]}
)
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/indexes/datetimes/methods/test_astype.py b/pandas/tests/indexes/datetimes/methods/test_astype.py
index 94cf86b7fb9c5..d339639dc5def 100644
--- a/pandas/tests/indexes/datetimes/methods/test_astype.py
+++ b/pandas/tests/indexes/datetimes/methods/test_astype.py
@@ -20,7 +20,7 @@
class TestDatetimeIndex:
def test_astype(self):
# GH 13149, GH 13209
- idx = DatetimeIndex(["2016-05-16", "NaT", NaT, np.NaN], name="idx")
+ idx = DatetimeIndex(["2016-05-16", "NaT", NaT, np.nan], name="idx")
result = idx.astype(object)
expected = Index(
@@ -84,7 +84,7 @@ def test_astype_str_nat(self):
# GH 13149, GH 13209
# verify that we are returning NaT as a string (and not unicode)
- idx = DatetimeIndex(["2016-05-16", "NaT", NaT, np.NaN])
+ idx = DatetimeIndex(["2016-05-16", "NaT", NaT, np.nan])
result = idx.astype(str)
expected = Index(["2016-05-16", "NaT", "NaT", "NaT"], dtype=object)
tm.assert_index_equal(result, expected)
@@ -141,7 +141,7 @@ def test_astype_str_freq_and_tz(self):
def test_astype_datetime64(self):
# GH 13149, GH 13209
- idx = DatetimeIndex(["2016-05-16", "NaT", NaT, np.NaN], name="idx")
+ idx = DatetimeIndex(["2016-05-16", "NaT", NaT, np.nan], name="idx")
result = idx.astype("datetime64[ns]")
tm.assert_index_equal(result, idx)
@@ -151,7 +151,7 @@ def test_astype_datetime64(self):
tm.assert_index_equal(result, idx)
assert result is idx
- idx_tz = DatetimeIndex(["2016-05-16", "NaT", NaT, np.NaN], tz="EST", name="idx")
+ idx_tz = DatetimeIndex(["2016-05-16", "NaT", NaT, np.nan], tz="EST", name="idx")
msg = "Cannot use .astype to convert from timezone-aware"
with pytest.raises(TypeError, match=msg):
# dt64tz->dt64 deprecated
@@ -202,7 +202,7 @@ def test_astype_object_with_nat(self):
)
def test_astype_raises(self, dtype):
# GH 13149, GH 13209
- idx = DatetimeIndex(["2016-05-16", "NaT", NaT, np.NaN])
+ idx = DatetimeIndex(["2016-05-16", "NaT", NaT, np.nan])
msg = "Cannot cast DatetimeIndex to dtype"
if dtype == "datetime64":
msg = "Casting to unit-less dtype 'datetime64' is not supported"
diff --git a/pandas/tests/indexes/datetimes/test_timezones.py b/pandas/tests/indexes/datetimes/test_timezones.py
index 6f3c83b999e94..09b06ecd5630d 100644
--- a/pandas/tests/indexes/datetimes/test_timezones.py
+++ b/pandas/tests/indexes/datetimes/test_timezones.py
@@ -538,8 +538,8 @@ def test_dti_tz_localize_ambiguous_nat(self, tz):
times = [
"11/06/2011 00:00",
- np.NaN,
- np.NaN,
+ np.nan,
+ np.nan,
"11/06/2011 02:00",
"11/06/2011 03:00",
]
diff --git a/pandas/tests/indexes/interval/test_interval.py b/pandas/tests/indexes/interval/test_interval.py
index 49e8df2b71f22..aff4944e7bd55 100644
--- a/pandas/tests/indexes/interval/test_interval.py
+++ b/pandas/tests/indexes/interval/test_interval.py
@@ -247,12 +247,12 @@ def test_is_unique_interval(self, closed):
assert idx.is_unique is True
# unique NaN
- idx = IntervalIndex.from_tuples([(np.NaN, np.NaN)], closed=closed)
+ idx = IntervalIndex.from_tuples([(np.nan, np.nan)], closed=closed)
assert idx.is_unique is True
# non-unique NaN
idx = IntervalIndex.from_tuples(
- [(np.NaN, np.NaN), (np.NaN, np.NaN)], closed=closed
+ [(np.nan, np.nan), (np.nan, np.nan)], closed=closed
)
assert idx.is_unique is False
diff --git a/pandas/tests/indexes/multi/test_join.py b/pandas/tests/indexes/multi/test_join.py
index c5a3512113655..700af142958b3 100644
--- a/pandas/tests/indexes/multi/test_join.py
+++ b/pandas/tests/indexes/multi/test_join.py
@@ -217,7 +217,7 @@ def test_join_multi_with_nan():
)
df2 = DataFrame(
data={"col2": [2.1, 2.2]},
- index=MultiIndex.from_product([["A"], [np.NaN, 2.0]], names=["id1", "id2"]),
+ index=MultiIndex.from_product([["A"], [np.nan, 2.0]], names=["id1", "id2"]),
)
result = df1.join(df2)
expected = DataFrame(
diff --git a/pandas/tests/indexes/period/methods/test_astype.py b/pandas/tests/indexes/period/methods/test_astype.py
index 2a605d136175e..e54cd73a35f59 100644
--- a/pandas/tests/indexes/period/methods/test_astype.py
+++ b/pandas/tests/indexes/period/methods/test_astype.py
@@ -17,14 +17,14 @@ class TestPeriodIndexAsType:
@pytest.mark.parametrize("dtype", [float, "timedelta64", "timedelta64[ns]"])
def test_astype_raises(self, dtype):
# GH#13149, GH#13209
- idx = PeriodIndex(["2016-05-16", "NaT", NaT, np.NaN], freq="D")
+ idx = PeriodIndex(["2016-05-16", "NaT", NaT, np.nan], freq="D")
msg = "Cannot cast PeriodIndex to dtype"
with pytest.raises(TypeError, match=msg):
idx.astype(dtype)
def test_astype_conversion(self):
# GH#13149, GH#13209
- idx = PeriodIndex(["2016-05-16", "NaT", NaT, np.NaN], freq="D", name="idx")
+ idx = PeriodIndex(["2016-05-16", "NaT", NaT, np.nan], freq="D", name="idx")
result = idx.astype(object)
expected = Index(
diff --git a/pandas/tests/indexes/period/test_pickle.py b/pandas/tests/indexes/period/test_pickle.py
index 82f906d1e361f..cb981ab10064f 100644
--- a/pandas/tests/indexes/period/test_pickle.py
+++ b/pandas/tests/indexes/period/test_pickle.py
@@ -14,7 +14,7 @@
class TestPickle:
@pytest.mark.parametrize("freq", ["D", "M", "A"])
def test_pickle_round_trip(self, freq):
- idx = PeriodIndex(["2016-05-16", "NaT", NaT, np.NaN], freq=freq)
+ idx = PeriodIndex(["2016-05-16", "NaT", NaT, np.nan], freq=freq)
result = tm.round_trip_pickle(idx)
tm.assert_index_equal(result, idx)
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index b3fb5a26ca63f..ffa0b115e34fb 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -473,7 +473,7 @@ def test_empty_fancy(self, index, dtype):
def test_empty_fancy_raises(self, index):
# DatetimeIndex is excluded, because it overrides getitem and should
# be tested separately.
- empty_farr = np.array([], dtype=np.float_)
+ empty_farr = np.array([], dtype=np.float64)
empty_index = type(index)([], dtype=index.dtype)
assert index[[]].identical(empty_index)
diff --git a/pandas/tests/indexes/timedeltas/methods/test_astype.py b/pandas/tests/indexes/timedeltas/methods/test_astype.py
index 9b17a8af59ac5..f69f0fd3d78e2 100644
--- a/pandas/tests/indexes/timedeltas/methods/test_astype.py
+++ b/pandas/tests/indexes/timedeltas/methods/test_astype.py
@@ -45,7 +45,7 @@ def test_astype_object_with_nat(self):
def test_astype(self):
# GH 13149, GH 13209
- idx = TimedeltaIndex([1e14, "NaT", NaT, np.NaN], name="idx")
+ idx = TimedeltaIndex([1e14, "NaT", NaT, np.nan], name="idx")
result = idx.astype(object)
expected = Index(
@@ -78,7 +78,7 @@ def test_astype_uint(self):
def test_astype_timedelta64(self):
# GH 13149, GH 13209
- idx = TimedeltaIndex([1e14, "NaT", NaT, np.NaN])
+ idx = TimedeltaIndex([1e14, "NaT", NaT, np.nan])
msg = (
r"Cannot convert from timedelta64\[ns\] to timedelta64. "
@@ -98,7 +98,7 @@ def test_astype_timedelta64(self):
@pytest.mark.parametrize("dtype", [float, "datetime64", "datetime64[ns]"])
def test_astype_raises(self, dtype):
# GH 13149, GH 13209
- idx = TimedeltaIndex([1e14, "NaT", NaT, np.NaN])
+ idx = TimedeltaIndex([1e14, "NaT", NaT, np.nan])
msg = "Cannot cast TimedeltaIndex to dtype"
with pytest.raises(TypeError, match=msg):
idx.astype(dtype)
diff --git a/pandas/tests/indexing/test_loc.py b/pandas/tests/indexing/test_loc.py
index abbf22a7fc70a..d0b6adfda0241 100644
--- a/pandas/tests/indexing/test_loc.py
+++ b/pandas/tests/indexing/test_loc.py
@@ -201,7 +201,7 @@ def test_column_types_consistent(self):
df = DataFrame(
data={
"channel": [1, 2, 3],
- "A": ["String 1", np.NaN, "String 2"],
+ "A": ["String 1", np.nan, "String 2"],
"B": [
Timestamp("2019-06-11 11:00:00"),
pd.NaT,
diff --git a/pandas/tests/interchange/test_impl.py b/pandas/tests/interchange/test_impl.py
index 2a65937a82200..8a25a2c1889f3 100644
--- a/pandas/tests/interchange/test_impl.py
+++ b/pandas/tests/interchange/test_impl.py
@@ -32,7 +32,7 @@ def string_data():
"234,3245.67",
"gSaf,qWer|Gre",
"asd3,4sad|",
- np.NaN,
+ np.nan,
]
}
diff --git a/pandas/tests/internals/test_internals.py b/pandas/tests/internals/test_internals.py
index 97d9f13bd9e9e..b7108896f01ed 100644
--- a/pandas/tests/internals/test_internals.py
+++ b/pandas/tests/internals/test_internals.py
@@ -473,7 +473,7 @@ def test_set_change_dtype(self, mgr):
mgr2.iset(
mgr2.items.get_loc("quux"), np.random.default_rng(2).standard_normal(N)
)
- assert mgr2.iget(idx).dtype == np.float_
+ assert mgr2.iget(idx).dtype == np.float64
def test_copy(self, mgr):
cp = mgr.copy(deep=False)
diff --git a/pandas/tests/io/formats/test_to_csv.py b/pandas/tests/io/formats/test_to_csv.py
index 32509a799fa69..c8e984a92f418 100644
--- a/pandas/tests/io/formats/test_to_csv.py
+++ b/pandas/tests/io/formats/test_to_csv.py
@@ -181,7 +181,7 @@ def test_to_csv_na_rep(self):
# see gh-11553
#
# Testing if NaN values are correctly represented in the index.
- df = DataFrame({"a": [0, np.NaN], "b": [0, 1], "c": [2, 3]})
+ df = DataFrame({"a": [0, np.nan], "b": [0, 1], "c": [2, 3]})
expected_rows = ["a,b,c", "0.0,0,2", "_,1,3"]
expected = tm.convert_rows_list_to_csv_str(expected_rows)
@@ -189,7 +189,7 @@ def test_to_csv_na_rep(self):
assert df.set_index(["a", "b"]).to_csv(na_rep="_") == expected
# now with an index containing only NaNs
- df = DataFrame({"a": np.NaN, "b": [0, 1], "c": [2, 3]})
+ df = DataFrame({"a": np.nan, "b": [0, 1], "c": [2, 3]})
expected_rows = ["a,b,c", "_,0,2", "_,1,3"]
expected = tm.convert_rows_list_to_csv_str(expected_rows)
diff --git a/pandas/tests/io/parser/test_parse_dates.py b/pandas/tests/io/parser/test_parse_dates.py
index eecacf29de872..c79fdd9145a6a 100644
--- a/pandas/tests/io/parser/test_parse_dates.py
+++ b/pandas/tests/io/parser/test_parse_dates.py
@@ -46,7 +46,7 @@
def test_read_csv_with_custom_date_parser(all_parsers):
# GH36111
def __custom_date_parser(time):
- time = time.astype(np.float_)
+ time = time.astype(np.float64)
time = time.astype(np.int_) # convert float seconds to int type
return pd.to_timedelta(time, unit="s")
@@ -86,7 +86,7 @@ def __custom_date_parser(time):
def test_read_csv_with_custom_date_parser_parse_dates_false(all_parsers):
# GH44366
def __custom_date_parser(time):
- time = time.astype(np.float_)
+ time = time.astype(np.float64)
time = time.astype(np.int_) # convert float seconds to int type
return pd.to_timedelta(time, unit="s")
diff --git a/pandas/tests/io/test_stata.py b/pandas/tests/io/test_stata.py
index 3cfd86049588b..7459aa1df8f3e 100644
--- a/pandas/tests/io/test_stata.py
+++ b/pandas/tests/io/test_stata.py
@@ -1475,9 +1475,9 @@ def test_stata_111(self, datapath):
df = read_stata(datapath("io", "data", "stata", "stata7_111.dta"))
original = DataFrame(
{
- "y": [1, 1, 1, 1, 1, 0, 0, np.NaN, 0, 0],
- "x": [1, 2, 1, 3, np.NaN, 4, 3, 5, 1, 6],
- "w": [2, np.NaN, 5, 2, 4, 4, 3, 1, 2, 3],
+ "y": [1, 1, 1, 1, 1, 0, 0, np.nan, 0, 0],
+ "x": [1, 2, 1, 3, np.nan, 4, 3, 5, 1, 6],
+ "w": [2, np.nan, 5, 2, 4, 4, 3, 1, 2, 3],
"z": ["a", "b", "c", "d", "e", "", "g", "h", "i", "j"],
}
)
diff --git a/pandas/tests/reductions/test_reductions.py b/pandas/tests/reductions/test_reductions.py
index afe5b3c66a611..87892a81cef3d 100644
--- a/pandas/tests/reductions/test_reductions.py
+++ b/pandas/tests/reductions/test_reductions.py
@@ -867,7 +867,7 @@ def test_idxmin(self):
string_series = tm.makeStringSeries().rename("series")
# add some NaNs
- string_series[5:15] = np.NaN
+ string_series[5:15] = np.nan
# skipna or no
assert string_series[string_series.idxmin()] == string_series.min()
@@ -900,7 +900,7 @@ def test_idxmax(self):
string_series = tm.makeStringSeries().rename("series")
# add some NaNs
- string_series[5:15] = np.NaN
+ string_series[5:15] = np.nan
# skipna or no
assert string_series[string_series.idxmax()] == string_series.max()
diff --git a/pandas/tests/reductions/test_stat_reductions.py b/pandas/tests/reductions/test_stat_reductions.py
index 58c5fc7269aee..55d78c516b6f3 100644
--- a/pandas/tests/reductions/test_stat_reductions.py
+++ b/pandas/tests/reductions/test_stat_reductions.py
@@ -99,7 +99,7 @@ def _check_stat_op(
f = getattr(Series, name)
# add some NaNs
- string_series_[5:15] = np.NaN
+ string_series_[5:15] = np.nan
# mean, idxmax, idxmin, min, and max are valid for dates
if name not in ["max", "min", "mean", "median", "std"]:
diff --git a/pandas/tests/resample/test_datetime_index.py b/pandas/tests/resample/test_datetime_index.py
index 1d72e6d3970ca..dbda751e82113 100644
--- a/pandas/tests/resample/test_datetime_index.py
+++ b/pandas/tests/resample/test_datetime_index.py
@@ -478,7 +478,7 @@ def test_resample_how_method(unit):
)
s.index = s.index.as_unit(unit)
expected = Series(
- [11, np.NaN, np.NaN, np.NaN, np.NaN, np.NaN, 22],
+ [11, np.nan, np.nan, np.nan, np.nan, np.nan, 22],
index=DatetimeIndex(
[
Timestamp("2015-03-31 21:48:50"),
@@ -1356,7 +1356,7 @@ def test_resample_consistency(unit):
i30 = date_range("2002-02-02", periods=4, freq="30T").as_unit(unit)
s = Series(np.arange(4.0), index=i30)
- s.iloc[2] = np.NaN
+ s.iloc[2] = np.nan
# Upsample by factor 3 with reindex() and resample() methods:
i10 = date_range(i30[0], i30[-1], freq="10T").as_unit(unit)
diff --git a/pandas/tests/resample/test_period_index.py b/pandas/tests/resample/test_period_index.py
index 0ded7d7e6bfc5..7559a85de7a6b 100644
--- a/pandas/tests/resample/test_period_index.py
+++ b/pandas/tests/resample/test_period_index.py
@@ -793,7 +793,7 @@ def test_upsampling_ohlc(self, freq, period_mult, kind):
@pytest.mark.parametrize(
"freq, expected_values",
[
- ("1s", [3, np.NaN, 7, 11]),
+ ("1s", [3, np.nan, 7, 11]),
("2s", [3, (7 + 11) / 2]),
("3s", [(3 + 7) / 2, 11]),
],
diff --git a/pandas/tests/reshape/concat/test_concat.py b/pandas/tests/reshape/concat/test_concat.py
index 869bf3ace9492..3efcd930af581 100644
--- a/pandas/tests/reshape/concat/test_concat.py
+++ b/pandas/tests/reshape/concat/test_concat.py
@@ -507,10 +507,10 @@ def test_concat_duplicate_indices_raise(self):
concat([df1, df2], axis=1)
-@pytest.mark.parametrize("dt", np.sctypes["float"])
-def test_concat_no_unnecessary_upcast(dt, frame_or_series):
+def test_concat_no_unnecessary_upcast(float_numpy_dtype, frame_or_series):
# GH 13247
dims = frame_or_series(dtype=object).ndim
+ dt = float_numpy_dtype
dfs = [
frame_or_series(np.array([1], dtype=dt, ndmin=dims)),
@@ -522,8 +522,8 @@ def test_concat_no_unnecessary_upcast(dt, frame_or_series):
@pytest.mark.parametrize("pdt", [Series, DataFrame])
-@pytest.mark.parametrize("dt", np.sctypes["int"])
-def test_concat_will_upcast(dt, pdt):
+def test_concat_will_upcast(pdt, any_signed_int_numpy_dtype):
+ dt = any_signed_int_numpy_dtype
dims = pdt().ndim
dfs = [
pdt(np.array([1], dtype=dt, ndmin=dims)),
diff --git a/pandas/tests/reshape/test_pivot.py b/pandas/tests/reshape/test_pivot.py
index 43786ee15d138..46da18445e135 100644
--- a/pandas/tests/reshape/test_pivot.py
+++ b/pandas/tests/reshape/test_pivot.py
@@ -1950,7 +1950,7 @@ def test_pivot_table_not_series(self):
result = df.pivot_table("col1", index="col3", columns="col2", aggfunc="sum")
expected = DataFrame(
- [[3, np.NaN, np.NaN], [np.NaN, 4, np.NaN], [np.NaN, np.NaN, 5]],
+ [[3, np.nan, np.nan], [np.nan, 4, np.nan], [np.nan, np.nan, 5]],
index=Index([1, 3, 9], name="col3"),
columns=Index(["C", "D", "E"], name="col2"),
)
@@ -2424,7 +2424,7 @@ def test_pivot_table_aggfunc_nunique_with_different_values(self):
],
names=(None, None, "b"),
)
- nparr = np.full((10, 10), np.NaN)
+ nparr = np.full((10, 10), np.nan)
np.fill_diagonal(nparr, 1.0)
expected = DataFrame(nparr, index=Index(range(10), name="a"), columns=columnval)
diff --git a/pandas/tests/scalar/test_na_scalar.py b/pandas/tests/scalar/test_na_scalar.py
index 213fa1791838d..287b7557f50f9 100644
--- a/pandas/tests/scalar/test_na_scalar.py
+++ b/pandas/tests/scalar/test_na_scalar.py
@@ -103,9 +103,9 @@ def test_comparison_ops(comparison_op, other):
False,
np.bool_(False),
np.int_(0),
- np.float_(0),
+ np.float64(0),
np.int_(-0),
- np.float_(-0),
+ np.float64(-0),
],
)
@pytest.mark.parametrize("asarray", [True, False])
@@ -123,7 +123,7 @@ def test_pow_special(value, asarray):
@pytest.mark.parametrize(
- "value", [1, 1.0, True, np.bool_(True), np.int_(1), np.float_(1)]
+ "value", [1, 1.0, True, np.bool_(True), np.int_(1), np.float64(1)]
)
@pytest.mark.parametrize("asarray", [True, False])
def test_rpow_special(value, asarray):
@@ -133,14 +133,14 @@ def test_rpow_special(value, asarray):
if asarray:
result = result[0]
- elif not isinstance(value, (np.float_, np.bool_, np.int_)):
+ elif not isinstance(value, (np.float64, np.bool_, np.int_)):
# this assertion isn't possible with asarray=True
assert isinstance(result, type(value))
assert result == value
-@pytest.mark.parametrize("value", [-1, -1.0, np.int_(-1), np.float_(-1)])
+@pytest.mark.parametrize("value", [-1, -1.0, np.int_(-1), np.float64(-1)])
@pytest.mark.parametrize("asarray", [True, False])
def test_rpow_minus_one(value, asarray):
if asarray:
diff --git a/pandas/tests/series/accessors/test_dt_accessor.py b/pandas/tests/series/accessors/test_dt_accessor.py
index e7fea9aa597b8..dd810a31c25af 100644
--- a/pandas/tests/series/accessors/test_dt_accessor.py
+++ b/pandas/tests/series/accessors/test_dt_accessor.py
@@ -739,9 +739,9 @@ def test_dt_timetz_accessor(self, tz_naive_fixture):
"input_series, expected_output",
[
[["2020-01-01"], [[2020, 1, 3]]],
- [[pd.NaT], [[np.NaN, np.NaN, np.NaN]]],
+ [[pd.NaT], [[np.nan, np.nan, np.nan]]],
[["2019-12-31", "2019-12-29"], [[2020, 1, 2], [2019, 52, 7]]],
- [["2010-01-01", pd.NaT], [[2009, 53, 5], [np.NaN, np.NaN, np.NaN]]],
+ [["2010-01-01", pd.NaT], [[2009, 53, 5], [np.nan, np.nan, np.nan]]],
# see GH#36032
[["2016-01-08", "2016-01-04"], [[2016, 1, 5], [2016, 1, 1]]],
[["2016-01-07", "2016-01-01"], [[2016, 1, 4], [2015, 53, 5]]],
diff --git a/pandas/tests/series/indexing/test_indexing.py b/pandas/tests/series/indexing/test_indexing.py
index 20f8dd1fc5b2a..7b857a487db78 100644
--- a/pandas/tests/series/indexing/test_indexing.py
+++ b/pandas/tests/series/indexing/test_indexing.py
@@ -196,9 +196,9 @@ def test_setitem_ambiguous_keyerror(indexer_sl):
def test_setitem(datetime_series):
- datetime_series[datetime_series.index[5]] = np.NaN
- datetime_series.iloc[[1, 2, 17]] = np.NaN
- datetime_series.iloc[6] = np.NaN
+ datetime_series[datetime_series.index[5]] = np.nan
+ datetime_series.iloc[[1, 2, 17]] = np.nan
+ datetime_series.iloc[6] = np.nan
assert np.isnan(datetime_series.iloc[6])
assert np.isnan(datetime_series.iloc[2])
datetime_series[np.isnan(datetime_series)] = 5
@@ -304,7 +304,7 @@ def test_underlying_data_conversion(using_copy_on_write):
def test_preserve_refs(datetime_series):
seq = datetime_series.iloc[[5, 10, 15]]
- seq.iloc[1] = np.NaN
+ seq.iloc[1] = np.nan
assert not np.isnan(datetime_series.iloc[10])
diff --git a/pandas/tests/series/methods/test_argsort.py b/pandas/tests/series/methods/test_argsort.py
index bd8b7b34bd402..5bcf42aad1db4 100644
--- a/pandas/tests/series/methods/test_argsort.py
+++ b/pandas/tests/series/methods/test_argsort.py
@@ -27,7 +27,7 @@ def test_argsort_numpy(self, datetime_series):
# with missing values
ts = ser.copy()
- ts[::2] = np.NaN
+ ts[::2] = np.nan
msg = "The behavior of Series.argsort in the presence of NA values"
with tm.assert_produces_warning(
diff --git a/pandas/tests/series/methods/test_asof.py b/pandas/tests/series/methods/test_asof.py
index d5f99f721d323..31c264d74d063 100644
--- a/pandas/tests/series/methods/test_asof.py
+++ b/pandas/tests/series/methods/test_asof.py
@@ -65,8 +65,8 @@ def test_scalar(self):
rng = date_range("1/1/1990", periods=N, freq="53s")
# Explicit cast to float avoid implicit cast when setting nan
ts = Series(np.arange(N), index=rng, dtype="float")
- ts.iloc[5:10] = np.NaN
- ts.iloc[15:20] = np.NaN
+ ts.iloc[5:10] = np.nan
+ ts.iloc[15:20] = np.nan
val1 = ts.asof(ts.index[7])
val2 = ts.asof(ts.index[19])
diff --git a/pandas/tests/series/methods/test_combine_first.py b/pandas/tests/series/methods/test_combine_first.py
index c7ca73da9ae66..d2d8eab1cb38b 100644
--- a/pandas/tests/series/methods/test_combine_first.py
+++ b/pandas/tests/series/methods/test_combine_first.py
@@ -36,7 +36,7 @@ def test_combine_first(self):
series = Series(values, index=tm.makeIntIndex(20))
series_copy = series * 2
- series_copy[::2] = np.NaN
+ series_copy[::2] = np.nan
# nothing used from the input
combined = series.combine_first(series_copy)
@@ -70,14 +70,14 @@ def test_combine_first(self):
tm.assert_series_equal(ser, result)
def test_combine_first_dt64(self):
- s0 = to_datetime(Series(["2010", np.NaN]))
- s1 = to_datetime(Series([np.NaN, "2011"]))
+ s0 = to_datetime(Series(["2010", np.nan]))
+ s1 = to_datetime(Series([np.nan, "2011"]))
rs = s0.combine_first(s1)
xp = to_datetime(Series(["2010", "2011"]))
tm.assert_series_equal(rs, xp)
- s0 = to_datetime(Series(["2010", np.NaN]))
- s1 = Series([np.NaN, "2011"])
+ s0 = to_datetime(Series(["2010", np.nan]))
+ s1 = Series([np.nan, "2011"])
rs = s0.combine_first(s1)
xp = Series([datetime(2010, 1, 1), "2011"], dtype="datetime64[ns]")
diff --git a/pandas/tests/series/methods/test_copy.py b/pandas/tests/series/methods/test_copy.py
index 5ebf45090d7b8..77600e0e7d293 100644
--- a/pandas/tests/series/methods/test_copy.py
+++ b/pandas/tests/series/methods/test_copy.py
@@ -27,7 +27,7 @@ def test_copy(self, deep, using_copy_on_write):
else:
assert not np.may_share_memory(ser.values, ser2.values)
- ser2[::2] = np.NaN
+ ser2[::2] = np.nan
if deep is not False or using_copy_on_write:
# Did not modify original Series
diff --git a/pandas/tests/series/methods/test_count.py b/pandas/tests/series/methods/test_count.py
index 90984a2e65cba..9ba163f347198 100644
--- a/pandas/tests/series/methods/test_count.py
+++ b/pandas/tests/series/methods/test_count.py
@@ -12,7 +12,7 @@ class TestSeriesCount:
def test_count(self, datetime_series):
assert datetime_series.count() == len(datetime_series)
- datetime_series[::2] = np.NaN
+ datetime_series[::2] = np.nan
assert datetime_series.count() == np.isfinite(datetime_series).sum()
diff --git a/pandas/tests/series/methods/test_drop_duplicates.py b/pandas/tests/series/methods/test_drop_duplicates.py
index 7e4503be2ec47..96c2e1ba6d9bb 100644
--- a/pandas/tests/series/methods/test_drop_duplicates.py
+++ b/pandas/tests/series/methods/test_drop_duplicates.py
@@ -71,7 +71,7 @@ def test_drop_duplicates_no_duplicates(any_numpy_dtype, keep, values):
class TestSeriesDropDuplicates:
@pytest.fixture(
- params=["int_", "uint", "float_", "unicode_", "timedelta64[h]", "datetime64[D]"]
+ params=["int_", "uint", "float64", "str_", "timedelta64[h]", "datetime64[D]"]
)
def dtype(self, request):
return request.param
diff --git a/pandas/tests/series/methods/test_fillna.py b/pandas/tests/series/methods/test_fillna.py
index 96c3674541e6b..46bc14da59eb0 100644
--- a/pandas/tests/series/methods/test_fillna.py
+++ b/pandas/tests/series/methods/test_fillna.py
@@ -75,7 +75,7 @@ def test_fillna(self):
tm.assert_series_equal(ts, ts.fillna(method="ffill"))
- ts.iloc[2] = np.NaN
+ ts.iloc[2] = np.nan
exp = Series([0.0, 1.0, 1.0, 3.0, 4.0], index=ts.index)
tm.assert_series_equal(ts.fillna(method="ffill"), exp)
@@ -881,7 +881,7 @@ def test_fillna_bug(self):
def test_ffill(self):
ts = Series([0.0, 1.0, 2.0, 3.0, 4.0], index=tm.makeDateIndex(5))
- ts.iloc[2] = np.NaN
+ ts.iloc[2] = np.nan
tm.assert_series_equal(ts.ffill(), ts.fillna(method="ffill"))
def test_ffill_mixed_dtypes_without_missing_data(self):
@@ -892,7 +892,7 @@ def test_ffill_mixed_dtypes_without_missing_data(self):
def test_bfill(self):
ts = Series([0.0, 1.0, 2.0, 3.0, 4.0], index=tm.makeDateIndex(5))
- ts.iloc[2] = np.NaN
+ ts.iloc[2] = np.nan
tm.assert_series_equal(ts.bfill(), ts.fillna(method="bfill"))
def test_pad_nan(self):
diff --git a/pandas/tests/series/methods/test_interpolate.py b/pandas/tests/series/methods/test_interpolate.py
index a984cd16997aa..619690f400d98 100644
--- a/pandas/tests/series/methods/test_interpolate.py
+++ b/pandas/tests/series/methods/test_interpolate.py
@@ -94,7 +94,7 @@ def test_interpolate(self, datetime_series):
ts = Series(np.arange(len(datetime_series), dtype=float), datetime_series.index)
ts_copy = ts.copy()
- ts_copy[5:10] = np.NaN
+ ts_copy[5:10] = np.nan
linear_interp = ts_copy.interpolate(method="linear")
tm.assert_series_equal(linear_interp, ts)
@@ -104,7 +104,7 @@ def test_interpolate(self, datetime_series):
).astype(float)
ord_ts_copy = ord_ts.copy()
- ord_ts_copy[5:10] = np.NaN
+ ord_ts_copy[5:10] = np.nan
time_interp = ord_ts_copy.interpolate(method="time")
tm.assert_series_equal(time_interp, ord_ts)
@@ -112,7 +112,7 @@ def test_interpolate(self, datetime_series):
def test_interpolate_time_raises_for_non_timeseries(self):
# When method='time' is used on a non-TimeSeries that contains a null
# value, a ValueError should be raised.
- non_ts = Series([0, 1, 2, np.NaN])
+ non_ts = Series([0, 1, 2, np.nan])
msg = "time-weighted interpolation only works on Series.* with a DatetimeIndex"
with pytest.raises(ValueError, match=msg):
non_ts.interpolate(method="time")
diff --git a/pandas/tests/series/methods/test_map.py b/pandas/tests/series/methods/test_map.py
index 00d1ad99332e9..783e18e541ad8 100644
--- a/pandas/tests/series/methods/test_map.py
+++ b/pandas/tests/series/methods/test_map.py
@@ -104,7 +104,7 @@ def test_map_series_stringdtype(any_string_dtype):
@pytest.mark.parametrize(
"data, expected_dtype",
- [(["1-1", "1-1", np.NaN], "category"), (["1-1", "1-2", np.NaN], object)],
+ [(["1-1", "1-1", np.nan], "category"), (["1-1", "1-2", np.nan], object)],
)
def test_map_categorical_with_nan_values(data, expected_dtype):
# GH 20714 bug fixed in: GH 24275
@@ -114,7 +114,7 @@ def func(val):
s = Series(data, dtype="category")
result = s.map(func, na_action="ignore")
- expected = Series(["1", "1", np.NaN], dtype=expected_dtype)
+ expected = Series(["1", "1", np.nan], dtype=expected_dtype)
tm.assert_series_equal(result, expected)
@@ -229,11 +229,11 @@ def test_map_int():
left = Series({"a": 1.0, "b": 2.0, "c": 3.0, "d": 4})
right = Series({1: 11, 2: 22, 3: 33})
- assert left.dtype == np.float_
+ assert left.dtype == np.float64
assert issubclass(right.dtype.type, np.integer)
merged = left.map(right)
- assert merged.dtype == np.float_
+ assert merged.dtype == np.float64
assert isna(merged["d"])
assert not isna(merged["c"])
diff --git a/pandas/tests/series/methods/test_pct_change.py b/pandas/tests/series/methods/test_pct_change.py
index 38a42062b275e..4dabf7b87e2cd 100644
--- a/pandas/tests/series/methods/test_pct_change.py
+++ b/pandas/tests/series/methods/test_pct_change.py
@@ -40,7 +40,7 @@ def test_pct_change_with_duplicate_axis(self):
result = Series(range(5), common_idx).pct_change(freq="B")
# the reason that the expected should be like this is documented at PR 28681
- expected = Series([np.NaN, np.inf, np.NaN, np.NaN, 3.0], common_idx)
+ expected = Series([np.nan, np.inf, np.nan, np.nan, 3.0], common_idx)
tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/series/methods/test_rank.py b/pandas/tests/series/methods/test_rank.py
index 766a2415d89fb..24cf97c05c0a8 100644
--- a/pandas/tests/series/methods/test_rank.py
+++ b/pandas/tests/series/methods/test_rank.py
@@ -185,7 +185,7 @@ def test_rank_categorical(self):
# Test na_option for rank data
na_ser = Series(
- ["first", "second", "third", "fourth", "fifth", "sixth", np.NaN]
+ ["first", "second", "third", "fourth", "fifth", "sixth", np.nan]
).astype(
CategoricalDtype(
["first", "second", "third", "fourth", "fifth", "sixth", "seventh"],
@@ -195,7 +195,7 @@ def test_rank_categorical(self):
exp_top = Series([2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 1.0])
exp_bot = Series([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
- exp_keep = Series([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, np.NaN])
+ exp_keep = Series([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, np.nan])
tm.assert_series_equal(na_ser.rank(na_option="top"), exp_top)
tm.assert_series_equal(na_ser.rank(na_option="bottom"), exp_bot)
@@ -204,7 +204,7 @@ def test_rank_categorical(self):
# Test na_option for rank data with ascending False
exp_top = Series([7.0, 6.0, 5.0, 4.0, 3.0, 2.0, 1.0])
exp_bot = Series([6.0, 5.0, 4.0, 3.0, 2.0, 1.0, 7.0])
- exp_keep = Series([6.0, 5.0, 4.0, 3.0, 2.0, 1.0, np.NaN])
+ exp_keep = Series([6.0, 5.0, 4.0, 3.0, 2.0, 1.0, np.nan])
tm.assert_series_equal(na_ser.rank(na_option="top", ascending=False), exp_top)
tm.assert_series_equal(
@@ -223,12 +223,12 @@ def test_rank_categorical(self):
na_ser.rank(na_option=True, ascending=False)
# Test with pct=True
- na_ser = Series(["first", "second", "third", "fourth", np.NaN]).astype(
+ na_ser = Series(["first", "second", "third", "fourth", np.nan]).astype(
CategoricalDtype(["first", "second", "third", "fourth"], True)
)
exp_top = Series([0.4, 0.6, 0.8, 1.0, 0.2])
exp_bot = Series([0.2, 0.4, 0.6, 0.8, 1.0])
- exp_keep = Series([0.25, 0.5, 0.75, 1.0, np.NaN])
+ exp_keep = Series([0.25, 0.5, 0.75, 1.0, np.nan])
tm.assert_series_equal(na_ser.rank(na_option="top", pct=True), exp_top)
tm.assert_series_equal(na_ser.rank(na_option="bottom", pct=True), exp_bot)
diff --git a/pandas/tests/series/methods/test_reindex.py b/pandas/tests/series/methods/test_reindex.py
index 52446f96009d5..2ab1cd13a31d8 100644
--- a/pandas/tests/series/methods/test_reindex.py
+++ b/pandas/tests/series/methods/test_reindex.py
@@ -194,7 +194,7 @@ def test_reindex_int(datetime_series):
reindexed_int = int_ts.reindex(datetime_series.index)
# if NaNs introduced
- assert reindexed_int.dtype == np.float_
+ assert reindexed_int.dtype == np.float64
# NO NaNs introduced
reindexed_int = int_ts.reindex(int_ts.index[::2])
@@ -425,11 +425,11 @@ def test_reindexing_with_float64_NA_log():
s = Series([1.0, NA], dtype=Float64Dtype())
s_reindex = s.reindex(range(3))
result = s_reindex.values._data
- expected = np.array([1, np.NaN, np.NaN])
+ expected = np.array([1, np.nan, np.nan])
tm.assert_numpy_array_equal(result, expected)
with tm.assert_produces_warning(None):
result_log = np.log(s_reindex)
- expected_log = Series([0, np.NaN, np.NaN], dtype=Float64Dtype())
+ expected_log = Series([0, np.nan, np.nan], dtype=Float64Dtype())
tm.assert_series_equal(result_log, expected_log)
diff --git a/pandas/tests/series/methods/test_sort_values.py b/pandas/tests/series/methods/test_sort_values.py
index c3e074dc68c82..4808272879071 100644
--- a/pandas/tests/series/methods/test_sort_values.py
+++ b/pandas/tests/series/methods/test_sort_values.py
@@ -18,7 +18,7 @@ def test_sort_values(self, datetime_series, using_copy_on_write):
tm.assert_series_equal(expected, result)
ts = datetime_series.copy()
- ts[:5] = np.NaN
+ ts[:5] = np.nan
vals = ts.values
result = ts.sort_values()
diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py
index 331afc4345616..611f4a7f790a6 100644
--- a/pandas/tests/series/test_constructors.py
+++ b/pandas/tests/series/test_constructors.py
@@ -165,7 +165,7 @@ def test_constructor(self, datetime_series):
assert id(datetime_series.index) == id(derived.index)
# Mixed type Series
- mixed = Series(["hello", np.NaN], index=[0, 1])
+ mixed = Series(["hello", np.nan], index=[0, 1])
assert mixed.dtype == np.object_
assert np.isnan(mixed[1])
@@ -1464,8 +1464,8 @@ def test_fromDict(self):
assert series.dtype == np.float64
def test_fromValue(self, datetime_series):
- nans = Series(np.NaN, index=datetime_series.index, dtype=np.float64)
- assert nans.dtype == np.float_
+ nans = Series(np.nan, index=datetime_series.index, dtype=np.float64)
+ assert nans.dtype == np.float64
assert len(nans) == len(datetime_series)
strings = Series("foo", index=datetime_series.index)
diff --git a/pandas/tests/series/test_cumulative.py b/pandas/tests/series/test_cumulative.py
index 4c5fd2d44e4f4..e6f7b2a5e69e0 100644
--- a/pandas/tests/series/test_cumulative.py
+++ b/pandas/tests/series/test_cumulative.py
@@ -31,7 +31,7 @@ def test_datetime_series(self, datetime_series, func):
# with missing values
ts = datetime_series.copy()
- ts[::2] = np.NaN
+ ts[::2] = np.nan
result = func(ts)[1::2]
expected = func(np.array(ts.dropna()))
@@ -47,7 +47,7 @@ def test_cummin_cummax(self, datetime_series, method):
tm.assert_numpy_array_equal(result, expected)
ts = datetime_series.copy()
- ts[::2] = np.NaN
+ ts[::2] = np.nan
result = getattr(ts, method)()[1::2]
expected = ufunc(ts.dropna())
diff --git a/pandas/tests/series/test_logical_ops.py b/pandas/tests/series/test_logical_ops.py
index 4dab3e8f62598..26046ef9ba295 100644
--- a/pandas/tests/series/test_logical_ops.py
+++ b/pandas/tests/series/test_logical_ops.py
@@ -93,7 +93,7 @@ def test_logical_operators_int_dtype_with_float(self):
msg = "Cannot perform.+with a dtyped.+array and scalar of type"
with pytest.raises(TypeError, match=msg):
- s_0123 & np.NaN
+ s_0123 & np.nan
with pytest.raises(TypeError, match=msg):
s_0123 & 3.14
msg = "unsupported operand type.+for &:"
@@ -149,11 +149,11 @@ def test_logical_operators_int_dtype_with_object(self):
# GH#9016: support bitwise op for integer types
s_0123 = Series(range(4), dtype="int64")
- result = s_0123 & Series([False, np.NaN, False, False])
+ result = s_0123 & Series([False, np.nan, False, False])
expected = Series([False] * 4)
tm.assert_series_equal(result, expected)
- s_abNd = Series(["a", "b", np.NaN, "d"])
+ s_abNd = Series(["a", "b", np.nan, "d"])
with pytest.raises(TypeError, match="unsupported.* 'int' and 'str'"):
s_0123 & s_abNd
diff --git a/pandas/tests/series/test_missing.py b/pandas/tests/series/test_missing.py
index 9f17f6d86cf93..cafc69c4d0f20 100644
--- a/pandas/tests/series/test_missing.py
+++ b/pandas/tests/series/test_missing.py
@@ -84,7 +84,7 @@ def test_logical_range_select(self, datetime_series):
def test_valid(self, datetime_series):
ts = datetime_series.copy()
ts.index = ts.index._with_freq(None)
- ts[::2] = np.NaN
+ ts[::2] = np.nan
result = ts.dropna()
assert len(result) == ts.count()
diff --git a/pandas/tests/series/test_repr.py b/pandas/tests/series/test_repr.py
index 4c92b5694c43b..f294885fb8f4d 100644
--- a/pandas/tests/series/test_repr.py
+++ b/pandas/tests/series/test_repr.py
@@ -83,7 +83,7 @@ def test_string(self, string_series):
str(string_series.astype(int))
# with NaNs
- string_series[5:7] = np.NaN
+ string_series[5:7] = np.nan
str(string_series)
def test_object(self, object_series):
diff --git a/pandas/tests/test_nanops.py b/pandas/tests/test_nanops.py
index 76784ec726afe..a0062d2b6dd44 100644
--- a/pandas/tests/test_nanops.py
+++ b/pandas/tests/test_nanops.py
@@ -323,7 +323,7 @@ def check_fun_data(
res = testfunc(testarval, axis=axis, skipna=skipna, **kwargs)
if (
- isinstance(targ, np.complex_)
+ isinstance(targ, np.complex128)
and isinstance(res, float)
and np.isnan(targ)
and np.isnan(res)
diff --git a/pandas/tests/tools/test_to_datetime.py b/pandas/tests/tools/test_to_datetime.py
index 5e51edfee17f1..93fe9b05adb4f 100644
--- a/pandas/tests/tools/test_to_datetime.py
+++ b/pandas/tests/tools/test_to_datetime.py
@@ -1570,8 +1570,8 @@ def test_convert_object_to_datetime_with_cache(
(Series([""] * 60), Series([NaT] * 60, dtype="datetime64[ns]")),
(Series([pd.NA] * 20), Series([NaT] * 20, dtype="datetime64[ns]")),
(Series([pd.NA] * 60), Series([NaT] * 60, dtype="datetime64[ns]")),
- (Series([np.NaN] * 20), Series([NaT] * 20, dtype="datetime64[ns]")),
- (Series([np.NaN] * 60), Series([NaT] * 60, dtype="datetime64[ns]")),
+ (Series([np.nan] * 20), Series([NaT] * 20, dtype="datetime64[ns]")),
+ (Series([np.nan] * 60), Series([NaT] * 60, dtype="datetime64[ns]")),
),
)
def test_to_datetime_converts_null_like_to_nat(self, cache, input, expected):
diff --git a/pandas/tests/util/test_assert_almost_equal.py b/pandas/tests/util/test_assert_almost_equal.py
index a86302f158005..8527efdbf7867 100644
--- a/pandas/tests/util/test_assert_almost_equal.py
+++ b/pandas/tests/util/test_assert_almost_equal.py
@@ -293,7 +293,7 @@ def test_assert_almost_equal_null():
_assert_almost_equal_both(None, None)
-@pytest.mark.parametrize("a,b", [(None, np.NaN), (None, 0), (np.NaN, 0)])
+@pytest.mark.parametrize("a,b", [(None, np.nan), (None, 0), (np.nan, 0)])
def test_assert_not_almost_equal_null(a, b):
_assert_not_almost_equal(a, b)
diff --git a/pandas/tests/window/conftest.py b/pandas/tests/window/conftest.py
index 2dd4458172593..73ab470ab97a7 100644
--- a/pandas/tests/window/conftest.py
+++ b/pandas/tests/window/conftest.py
@@ -126,7 +126,7 @@ def series():
"""Make mocked series as fixture."""
arr = np.random.default_rng(2).standard_normal(100)
locs = np.arange(20, 40)
- arr[locs] = np.NaN
+ arr[locs] = np.nan
series = Series(arr, index=bdate_range(datetime(2009, 1, 1), periods=100))
return series
diff --git a/pandas/tests/window/test_api.py b/pandas/tests/window/test_api.py
index d901fe58950e3..33858e10afd75 100644
--- a/pandas/tests/window/test_api.py
+++ b/pandas/tests/window/test_api.py
@@ -223,9 +223,9 @@ def test_count_nonnumeric_types(step):
Period("2012-02"),
Period("2012-03"),
],
- "fl_inf": [1.0, 2.0, np.Inf],
- "fl_nan": [1.0, 2.0, np.NaN],
- "str_nan": ["aa", "bb", np.NaN],
+ "fl_inf": [1.0, 2.0, np.inf],
+ "fl_nan": [1.0, 2.0, np.nan],
+ "str_nan": ["aa", "bb", np.nan],
"dt_nat": dt_nat_col,
"periods_nat": [
Period("2012-01"),
diff --git a/pandas/tests/window/test_apply.py b/pandas/tests/window/test_apply.py
index 6af5a41e96e0a..4e4eca6e772e7 100644
--- a/pandas/tests/window/test_apply.py
+++ b/pandas/tests/window/test_apply.py
@@ -184,8 +184,8 @@ def numpysum(x, par):
def test_nans(raw):
obj = Series(np.random.default_rng(2).standard_normal(50))
- obj[:10] = np.NaN
- obj[-10:] = np.NaN
+ obj[:10] = np.nan
+ obj[-10:] = np.nan
result = obj.rolling(50, min_periods=30).apply(f, raw=raw)
tm.assert_almost_equal(result.iloc[-1], np.mean(obj[10:-10]))
@@ -210,12 +210,12 @@ def test_nans(raw):
def test_center(raw):
obj = Series(np.random.default_rng(2).standard_normal(50))
- obj[:10] = np.NaN
- obj[-10:] = np.NaN
+ obj[:10] = np.nan
+ obj[-10:] = np.nan
result = obj.rolling(20, min_periods=15, center=True).apply(f, raw=raw)
expected = (
- concat([obj, Series([np.NaN] * 9)])
+ concat([obj, Series([np.nan] * 9)])
.rolling(20, min_periods=15)
.apply(f, raw=raw)
.iloc[9:]
diff --git a/pandas/tests/window/test_ewm.py b/pandas/tests/window/test_ewm.py
index 45d481fdd2e44..c5c395414b450 100644
--- a/pandas/tests/window/test_ewm.py
+++ b/pandas/tests/window/test_ewm.py
@@ -417,7 +417,7 @@ def test_ewm_alpha():
# GH 10789
arr = np.random.default_rng(2).standard_normal(100)
locs = np.arange(20, 40)
- arr[locs] = np.NaN
+ arr[locs] = np.nan
s = Series(arr)
a = s.ewm(alpha=0.61722699889169674).mean()
@@ -433,7 +433,7 @@ def test_ewm_domain_checks():
# GH 12492
arr = np.random.default_rng(2).standard_normal(100)
locs = np.arange(20, 40)
- arr[locs] = np.NaN
+ arr[locs] = np.nan
s = Series(arr)
msg = "comass must satisfy: comass >= 0"
@@ -484,8 +484,8 @@ def test_ew_empty_series(method):
def test_ew_min_periods(min_periods, name):
# excluding NaNs correctly
arr = np.random.default_rng(2).standard_normal(50)
- arr[:10] = np.NaN
- arr[-10:] = np.NaN
+ arr[:10] = np.nan
+ arr[-10:] = np.nan
s = Series(arr)
# check min_periods
@@ -515,11 +515,11 @@ def test_ew_min_periods(min_periods, name):
else:
# ewm.std, ewm.var with bias=False require at least
# two values
- tm.assert_series_equal(result, Series([np.NaN]))
+ tm.assert_series_equal(result, Series([np.nan]))
# pass in ints
result2 = getattr(Series(np.arange(50)).ewm(span=10), name)()
- assert result2.dtype == np.float_
+ assert result2.dtype == np.float64
@pytest.mark.parametrize("name", ["cov", "corr"])
@@ -527,8 +527,8 @@ def test_ewm_corr_cov(name):
A = Series(np.random.default_rng(2).standard_normal(50), index=range(50))
B = A[2:] + np.random.default_rng(2).standard_normal(48)
- A[:10] = np.NaN
- B.iloc[-10:] = np.NaN
+ A[:10] = np.nan
+ B.iloc[-10:] = np.nan
result = getattr(A.ewm(com=20, min_periods=5), name)(B)
assert np.isnan(result.values[:14]).all()
@@ -542,8 +542,8 @@ def test_ewm_corr_cov_min_periods(name, min_periods):
A = Series(np.random.default_rng(2).standard_normal(50), index=range(50))
B = A[2:] + np.random.default_rng(2).standard_normal(48)
- A[:10] = np.NaN
- B.iloc[-10:] = np.NaN
+ A[:10] = np.nan
+ B.iloc[-10:] = np.nan
result = getattr(A.ewm(com=20, min_periods=min_periods), name)(B)
# binary functions (ewmcov, ewmcorr) with bias=False require at
@@ -560,13 +560,13 @@ def test_ewm_corr_cov_min_periods(name, min_periods):
result = getattr(Series([1.0]).ewm(com=50, min_periods=min_periods), name)(
Series([1.0])
)
- tm.assert_series_equal(result, Series([np.NaN]))
+ tm.assert_series_equal(result, Series([np.nan]))
@pytest.mark.parametrize("name", ["cov", "corr"])
def test_different_input_array_raise_exception(name):
A = Series(np.random.default_rng(2).standard_normal(50), index=range(50))
- A[:10] = np.NaN
+ A[:10] = np.nan
msg = "other must be a DataFrame or Series"
# exception raised is Exception
diff --git a/pandas/tests/window/test_pairwise.py b/pandas/tests/window/test_pairwise.py
index 8dac6d271510a..b6f2365afb457 100644
--- a/pandas/tests/window/test_pairwise.py
+++ b/pandas/tests/window/test_pairwise.py
@@ -410,7 +410,7 @@ def test_cov_mulittindex(self):
expected = DataFrame(
np.vstack(
(
- np.full((8, 8), np.NaN),
+ np.full((8, 8), np.nan),
np.full((8, 8), 32.000000),
np.full((8, 8), 63.881919),
)
diff --git a/pandas/tests/window/test_rolling.py b/pandas/tests/window/test_rolling.py
index 4df20282bbfa6..70b7534b296f3 100644
--- a/pandas/tests/window/test_rolling.py
+++ b/pandas/tests/window/test_rolling.py
@@ -142,7 +142,7 @@ def test_constructor_timedelta_window_and_minperiods(window, raw):
index=date_range("2017-08-08", periods=n, freq="D"),
)
expected = DataFrame(
- {"value": np.append([np.NaN, 1.0], np.arange(3.0, 27.0, 3))},
+ {"value": np.append([np.nan, 1.0], np.arange(3.0, 27.0, 3))},
index=date_range("2017-08-08", periods=n, freq="D"),
)
result_roll_sum = df.rolling(window=window, min_periods=2).sum()
@@ -1461,15 +1461,15 @@ def test_rolling_mean_all_nan_window_floating_artifacts(start, exp_values):
0.03,
0.03,
0.001,
- np.NaN,
+ np.nan,
0.002,
0.008,
- np.NaN,
- np.NaN,
- np.NaN,
- np.NaN,
- np.NaN,
- np.NaN,
+ np.nan,
+ np.nan,
+ np.nan,
+ np.nan,
+ np.nan,
+ np.nan,
0.005,
0.2,
]
@@ -1480,8 +1480,8 @@ def test_rolling_mean_all_nan_window_floating_artifacts(start, exp_values):
0.005,
0.005,
0.008,
- np.NaN,
- np.NaN,
+ np.nan,
+ np.nan,
0.005,
0.102500,
]
@@ -1495,7 +1495,7 @@ def test_rolling_mean_all_nan_window_floating_artifacts(start, exp_values):
def test_rolling_sum_all_nan_window_floating_artifacts():
# GH#41053
- df = DataFrame([0.002, 0.008, 0.005, np.NaN, np.NaN, np.NaN])
+ df = DataFrame([0.002, 0.008, 0.005, np.nan, np.nan, np.nan])
result = df.rolling(3, min_periods=0).sum()
expected = DataFrame([0.002, 0.010, 0.015, 0.013, 0.005, 0.0])
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/window/test_rolling_functions.py b/pandas/tests/window/test_rolling_functions.py
index bc0b3e496038c..940f0845befa2 100644
--- a/pandas/tests/window/test_rolling_functions.py
+++ b/pandas/tests/window/test_rolling_functions.py
@@ -150,8 +150,8 @@ def test_time_rule_frame(raw, frame, compare_func, roll_func, kwargs, minp):
)
def test_nans(compare_func, roll_func, kwargs):
obj = Series(np.random.default_rng(2).standard_normal(50))
- obj[:10] = np.NaN
- obj[-10:] = np.NaN
+ obj[:10] = np.nan
+ obj[-10:] = np.nan
result = getattr(obj.rolling(50, min_periods=30), roll_func)(**kwargs)
tm.assert_almost_equal(result.iloc[-1], compare_func(obj[10:-10]))
@@ -177,8 +177,8 @@ def test_nans(compare_func, roll_func, kwargs):
def test_nans_count():
obj = Series(np.random.default_rng(2).standard_normal(50))
- obj[:10] = np.NaN
- obj[-10:] = np.NaN
+ obj[:10] = np.nan
+ obj[-10:] = np.nan
result = obj.rolling(50, min_periods=30).count()
tm.assert_almost_equal(
result.iloc[-1], np.isfinite(obj[10:-10]).astype(float).sum()
@@ -241,15 +241,15 @@ def test_min_periods_count(series, step):
)
def test_center(roll_func, kwargs, minp):
obj = Series(np.random.default_rng(2).standard_normal(50))
- obj[:10] = np.NaN
- obj[-10:] = np.NaN
+ obj[:10] = np.nan
+ obj[-10:] = np.nan
result = getattr(obj.rolling(20, min_periods=minp, center=True), roll_func)(
**kwargs
)
expected = (
getattr(
- concat([obj, Series([np.NaN] * 9)]).rolling(20, min_periods=minp), roll_func
+ concat([obj, Series([np.nan] * 9)]).rolling(20, min_periods=minp), roll_func
)(**kwargs)
.iloc[9:]
.reset_index(drop=True)
diff --git a/pandas/tests/window/test_rolling_quantile.py b/pandas/tests/window/test_rolling_quantile.py
index 32296ae3f2470..d5a7010923563 100644
--- a/pandas/tests/window/test_rolling_quantile.py
+++ b/pandas/tests/window/test_rolling_quantile.py
@@ -89,8 +89,8 @@ def test_time_rule_frame(raw, frame, q):
def test_nans(q):
compare_func = partial(scoreatpercentile, per=q)
obj = Series(np.random.default_rng(2).standard_normal(50))
- obj[:10] = np.NaN
- obj[-10:] = np.NaN
+ obj[:10] = np.nan
+ obj[-10:] = np.nan
result = obj.rolling(50, min_periods=30).quantile(q)
tm.assert_almost_equal(result.iloc[-1], compare_func(obj[10:-10]))
@@ -128,12 +128,12 @@ def test_min_periods(series, minp, q, step):
@pytest.mark.parametrize("q", [0.0, 0.1, 0.5, 0.9, 1.0])
def test_center(q):
obj = Series(np.random.default_rng(2).standard_normal(50))
- obj[:10] = np.NaN
- obj[-10:] = np.NaN
+ obj[:10] = np.nan
+ obj[-10:] = np.nan
result = obj.rolling(20, center=True).quantile(q)
expected = (
- concat([obj, Series([np.NaN] * 9)])
+ concat([obj, Series([np.nan] * 9)])
.rolling(20)
.quantile(q)
.iloc[9:]
diff --git a/pandas/tests/window/test_rolling_skew_kurt.py b/pandas/tests/window/test_rolling_skew_kurt.py
index ada726401c4a0..79c14f243e7cc 100644
--- a/pandas/tests/window/test_rolling_skew_kurt.py
+++ b/pandas/tests/window/test_rolling_skew_kurt.py
@@ -79,8 +79,8 @@ def test_nans(sp_func, roll_func):
compare_func = partial(getattr(sp_stats, sp_func), bias=False)
obj = Series(np.random.default_rng(2).standard_normal(50))
- obj[:10] = np.NaN
- obj[-10:] = np.NaN
+ obj[:10] = np.nan
+ obj[-10:] = np.nan
result = getattr(obj.rolling(50, min_periods=30), roll_func)()
tm.assert_almost_equal(result.iloc[-1], compare_func(obj[10:-10]))
@@ -122,12 +122,12 @@ def test_min_periods(series, minp, roll_func, step):
@pytest.mark.parametrize("roll_func", ["kurt", "skew"])
def test_center(roll_func):
obj = Series(np.random.default_rng(2).standard_normal(50))
- obj[:10] = np.NaN
- obj[-10:] = np.NaN
+ obj[:10] = np.nan
+ obj[-10:] = np.nan
result = getattr(obj.rolling(20, center=True), roll_func)()
expected = (
- getattr(concat([obj, Series([np.NaN] * 9)]).rolling(20), roll_func)()
+ getattr(concat([obj, Series([np.nan] * 9)]).rolling(20), roll_func)()
.iloc[9:]
.reset_index(drop=True)
)
@@ -170,14 +170,14 @@ def test_center_reindex_frame(frame, roll_func):
def test_rolling_skew_edge_cases(step):
- expected = Series([np.NaN] * 4 + [0.0])[::step]
+ expected = Series([np.nan] * 4 + [0.0])[::step]
# yields all NaN (0 variance)
d = Series([1] * 5)
x = d.rolling(window=5, step=step).skew()
# index 4 should be 0 as it contains 5 same obs
tm.assert_series_equal(expected, x)
- expected = Series([np.NaN] * 5)[::step]
+ expected = Series([np.nan] * 5)[::step]
# yields all NaN (window too small)
d = Series(np.random.default_rng(2).standard_normal(5))
x = d.rolling(window=2, step=step).skew()
@@ -185,13 +185,13 @@ def test_rolling_skew_edge_cases(step):
# yields [NaN, NaN, NaN, 0.177994, 1.548824]
d = Series([-1.50837035, -0.1297039, 0.19501095, 1.73508164, 0.41941401])
- expected = Series([np.NaN, np.NaN, np.NaN, 0.177994, 1.548824])[::step]
+ expected = Series([np.nan, np.nan, np.nan, 0.177994, 1.548824])[::step]
x = d.rolling(window=4, step=step).skew()
tm.assert_series_equal(expected, x)
def test_rolling_kurt_edge_cases(step):
- expected = Series([np.NaN] * 4 + [-3.0])[::step]
+ expected = Series([np.nan] * 4 + [-3.0])[::step]
# yields all NaN (0 variance)
d = Series([1] * 5)
@@ -199,14 +199,14 @@ def test_rolling_kurt_edge_cases(step):
tm.assert_series_equal(expected, x)
# yields all NaN (window too small)
- expected = Series([np.NaN] * 5)[::step]
+ expected = Series([np.nan] * 5)[::step]
d = Series(np.random.default_rng(2).standard_normal(5))
x = d.rolling(window=3, step=step).kurt()
tm.assert_series_equal(expected, x)
# yields [NaN, NaN, NaN, 1.224307, 2.671499]
d = Series([-1.50837035, -0.1297039, 0.19501095, 1.73508164, 0.41941401])
- expected = Series([np.NaN, np.NaN, np.NaN, 1.224307, 2.671499])[::step]
+ expected = Series([np.nan, np.nan, np.nan, 1.224307, 2.671499])[::step]
x = d.rolling(window=4, step=step).kurt()
tm.assert_series_equal(expected, x)
diff --git a/pandas/tests/window/test_win_type.py b/pandas/tests/window/test_win_type.py
index 2ca02fef796ed..5052019ddb726 100644
--- a/pandas/tests/window/test_win_type.py
+++ b/pandas/tests/window/test_win_type.py
@@ -666,7 +666,7 @@ def test_weighted_var_big_window_no_segfault(win_types, center):
pytest.importorskip("scipy")
x = Series(0)
result = x.rolling(window=16, center=center, win_type=win_types).var()
- expected = Series(np.NaN)
+ expected = Series(np.nan)
tm.assert_series_equal(result, expected)
| Hi!
Due to NumPy's main namespace being changed in https://github.com/numpy/numpy/pull/24376, this PR updates NumPy imports/aliases: e.g. `np.NaN` and `np.infty` will no longer be available. Also, the canonical names for NumPy's `float` and `complex` scalar types, `np.float64` and `np.complex128`, should be used instead.
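For illustration, a minimal sketch of the rename (assuming a NumPy version where both spellings still resolve is *not* required — only the long-standing lowercase/sized aliases are used, which work before and after the NumPy 2.0 cleanup):

```python
import numpy as np

# np.nan replaces np.NaN; np.inf replaces np.Inf / np.infty.
values = np.array([1.0, np.nan, np.inf])
assert np.isnan(values).sum() == 1
assert np.isinf(values).sum() == 1

# Sized scalar types replace the removed np.float_ / np.complex_ aliases.
assert np.float64 is np.dtype("float64").type
assert np.complex128 is np.dtype("complex128").type
```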
`sctypes` will be removed from the top namespace, but its destination is not decided yet - therefore I import it from internal `np.core` for now. | https://api.github.com/repos/pandas-dev/pandas/pulls/54579 | 2023-08-16T13:21:18Z | 2023-08-16T20:20:36Z | 2023-08-16T20:20:36Z | 2023-08-16T20:21:13Z |
Added an Array Extension which includes missing or NAN values while wo… | diff --git a/pandas/arrays/__init__.py b/pandas/arrays/__init__.py
index 32e2afc0eef52..132b026e527a1 100644
--- a/pandas/arrays/__init__.py
+++ b/pandas/arrays/__init__.py
@@ -1,7 +1,7 @@
"""
All of pandas' ExtensionArrays.
-See :ref:`extending.extension-types` for more.
+ref:`extending.extension-types` for more about ExtensionArrays.
"""
from pandas.core.arrays import (
ArrowExtensionArray,
@@ -33,6 +33,7 @@
"SparseArray",
"StringArray",
"TimedeltaArray",
+ "NullableArray",
]
…rking with integer data types.
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54578 | 2023-08-16T12:41:20Z | 2023-08-16T17:50:12Z | null | 2023-08-16T17:50:13Z |
TST: annotations, fix test_invert | diff --git a/pandas/tests/extension/base/ops.py b/pandas/tests/extension/base/ops.py
index 064242f3649f4..d5eb65ec9d35d 100644
--- a/pandas/tests/extension/base/ops.py
+++ b/pandas/tests/extension/base/ops.py
@@ -239,9 +239,23 @@ def test_compare_array(self, data, comparison_op):
class BaseUnaryOpsTests(BaseOpsUtil):
def test_invert(self, data):
ser = pd.Series(data, name="name")
- result = ~ser
- expected = pd.Series(~data, name="name")
- tm.assert_series_equal(result, expected)
+ try:
+ # 10 is an arbitrary choice here, just avoid iterating over
+ # the whole array to trim test runtime
+ [~x for x in data[:10]]
+ except TypeError:
+ # scalars don't support invert -> we don't expect the vectorized
+ # operation to succeed
+ with pytest.raises(TypeError):
+ ~ser
+ with pytest.raises(TypeError):
+ ~data
+ else:
+ # Note we do not re-use the pointwise result to construct expected
+ # because python semantics for negating bools are weird see GH#54569
+ result = ~ser
+ expected = pd.Series(~data, name="name")
+ tm.assert_series_equal(result, expected)
@pytest.mark.parametrize("ufunc", [np.positive, np.negative, np.abs])
def test_unary_ufunc_dunder_equivalence(self, data, ufunc):
diff --git a/pandas/tests/extension/base/reduce.py b/pandas/tests/extension/base/reduce.py
index a6532a6190467..9b56b10681e15 100644
--- a/pandas/tests/extension/base/reduce.py
+++ b/pandas/tests/extension/base/reduce.py
@@ -13,22 +13,23 @@ class BaseReduceTests:
make sense for numeric/boolean operations.
"""
- def _supports_reduction(self, obj, op_name: str) -> bool:
+ def _supports_reduction(self, ser: pd.Series, op_name: str) -> bool:
# Specify if we expect this reduction to succeed.
return False
- def check_reduce(self, s, op_name, skipna):
+ def check_reduce(self, ser: pd.Series, op_name: str, skipna: bool):
# We perform the same operation on the np.float64 data and check
# that the results match. Override if you need to cast to something
# other than float64.
- res_op = getattr(s, op_name)
+ res_op = getattr(ser, op_name)
try:
- alt = s.astype("float64")
- except TypeError:
- # e.g. Interval can't cast, so let's cast to object and do
+ alt = ser.astype("float64")
+ except (TypeError, ValueError):
+ # e.g. Interval can't cast (TypeError), StringArray can't cast
+ # (ValueError), so let's cast to object and do
# the reduction pointwise
- alt = s.astype(object)
+ alt = ser.astype(object)
exp_op = getattr(alt, op_name)
if op_name == "count":
@@ -79,53 +80,53 @@ def check_reduce_frame(self, ser: pd.Series, op_name: str, skipna: bool):
@pytest.mark.parametrize("skipna", [True, False])
def test_reduce_series_boolean(self, data, all_boolean_reductions, skipna):
op_name = all_boolean_reductions
- s = pd.Series(data)
+ ser = pd.Series(data)
- if not self._supports_reduction(s, op_name):
+ if not self._supports_reduction(ser, op_name):
msg = (
"[Cc]annot perform|Categorical is not ordered for operation|"
"does not support reduction|"
)
with pytest.raises(TypeError, match=msg):
- getattr(s, op_name)(skipna=skipna)
+ getattr(ser, op_name)(skipna=skipna)
else:
- self.check_reduce(s, op_name, skipna)
+ self.check_reduce(ser, op_name, skipna)
@pytest.mark.filterwarnings("ignore::RuntimeWarning")
@pytest.mark.parametrize("skipna", [True, False])
def test_reduce_series_numeric(self, data, all_numeric_reductions, skipna):
op_name = all_numeric_reductions
- s = pd.Series(data)
+ ser = pd.Series(data)
- if not self._supports_reduction(s, op_name):
+ if not self._supports_reduction(ser, op_name):
msg = (
"[Cc]annot perform|Categorical is not ordered for operation|"
"does not support reduction|"
)
with pytest.raises(TypeError, match=msg):
- getattr(s, op_name)(skipna=skipna)
+ getattr(ser, op_name)(skipna=skipna)
else:
# min/max with empty produce numpy warnings
- self.check_reduce(s, op_name, skipna)
+ self.check_reduce(ser, op_name, skipna)
@pytest.mark.parametrize("skipna", [True, False])
def test_reduce_frame(self, data, all_numeric_reductions, skipna):
op_name = all_numeric_reductions
- s = pd.Series(data)
- if not is_numeric_dtype(s.dtype):
+ ser = pd.Series(data)
+ if not is_numeric_dtype(ser.dtype):
pytest.skip("not numeric dtype")
if op_name in ["count", "kurt", "sem"]:
pytest.skip(f"{op_name} not an array method")
- if not self._supports_reduction(s, op_name):
+ if not self._supports_reduction(ser, op_name):
pytest.skip(f"Reduction {op_name} not supported for this dtype")
- self.check_reduce_frame(s, op_name, skipna)
+ self.check_reduce_frame(ser, op_name, skipna)
# TODO: deprecate BaseNoReduceTests, BaseNumericReduceTests, BaseBooleanReduceTests
@@ -135,7 +136,7 @@ class BaseNoReduceTests(BaseReduceTests):
class BaseNumericReduceTests(BaseReduceTests):
# For backward compatibility only, this only runs the numeric reductions
- def _supports_reduction(self, obj, op_name: str) -> bool:
+ def _supports_reduction(self, ser: pd.Series, op_name: str) -> bool:
if op_name in ["any", "all"]:
pytest.skip("These are tested in BaseBooleanReduceTests")
return True
@@ -143,7 +144,7 @@ def _supports_reduction(self, obj, op_name: str) -> bool:
class BaseBooleanReduceTests(BaseReduceTests):
# For backward compatibility only, this only runs the numeric reductions
- def _supports_reduction(self, obj, op_name: str) -> bool:
+ def _supports_reduction(self, ser: pd.Series, op_name: str) -> bool:
if op_name not in ["any", "all"]:
pytest.skip("These are tested in BaseNumericReduceTests")
return True
diff --git a/pandas/tests/extension/decimal/test_decimal.py b/pandas/tests/extension/decimal/test_decimal.py
index baa056550624f..2f274354f0da0 100644
--- a/pandas/tests/extension/decimal/test_decimal.py
+++ b/pandas/tests/extension/decimal/test_decimal.py
@@ -71,15 +71,15 @@ def _get_expected_exception(
) -> type[Exception] | None:
return None
- def _supports_reduction(self, obj, op_name: str) -> bool:
+ def _supports_reduction(self, ser: pd.Series, op_name: str) -> bool:
return True
- def check_reduce(self, s, op_name, skipna):
+ def check_reduce(self, ser: pd.Series, op_name: str, skipna: bool):
if op_name == "count":
- return super().check_reduce(s, op_name, skipna)
+ return super().check_reduce(ser, op_name, skipna)
else:
- result = getattr(s, op_name)(skipna=skipna)
- expected = getattr(np.asarray(s), op_name)()
+ result = getattr(ser, op_name)(skipna=skipna)
+ expected = getattr(np.asarray(ser), op_name)()
tm.assert_almost_equal(result, expected)
def test_reduce_series_numeric(self, data, all_numeric_reductions, skipna, request):
@@ -216,12 +216,6 @@ def test_series_repr(self, data):
assert data.dtype.name in repr(ser)
assert "Decimal: " in repr(ser)
- @pytest.mark.xfail(
- reason="Looks like the test (incorrectly) implicitly assumes int/bool dtype"
- )
- def test_invert(self, data):
- super().test_invert(data)
-
@pytest.mark.xfail(reason="Inconsistent array-vs-scalar behavior")
@pytest.mark.parametrize("ufunc", [np.positive, np.negative, np.abs])
def test_unary_ufunc_dunder_equivalence(self, data, ufunc):
diff --git a/pandas/tests/extension/test_arrow.py b/pandas/tests/extension/test_arrow.py
index 4c05049ddfcf5..35184450e9c11 100644
--- a/pandas/tests/extension/test_arrow.py
+++ b/pandas/tests/extension/test_arrow.py
@@ -401,8 +401,8 @@ def test_accumulate_series(self, data, all_numeric_accumulations, skipna, reques
self.check_accumulate(ser, op_name, skipna)
- def _supports_reduction(self, obj, op_name: str) -> bool:
- dtype = tm.get_dtype(obj)
+ def _supports_reduction(self, ser: pd.Series, op_name: str) -> bool:
+ dtype = ser.dtype
# error: Item "dtype[Any]" of "dtype[Any] | ExtensionDtype" has
# no attribute "pyarrow_dtype"
pa_dtype = dtype.pyarrow_dtype # type: ignore[union-attr]
@@ -445,20 +445,25 @@ def _supports_reduction(self, obj, op_name: str) -> bool:
return True
- def check_reduce(self, ser, op_name, skipna):
- pa_dtype = ser.dtype.pyarrow_dtype
- if op_name == "count":
- result = getattr(ser, op_name)()
+ def check_reduce(self, ser: pd.Series, op_name: str, skipna: bool):
+ # error: Item "dtype[Any]" of "dtype[Any] | ExtensionDtype" has no
+ # attribute "pyarrow_dtype"
+ pa_dtype = ser.dtype.pyarrow_dtype # type: ignore[union-attr]
+ if pa.types.is_integer(pa_dtype) or pa.types.is_floating(pa_dtype):
+ alt = ser.astype("Float64")
else:
- result = getattr(ser, op_name)(skipna=skipna)
+ # TODO: in the opposite case, aren't we testing... nothing? For
+ # e.g. date/time dtypes trying to calculate 'expected' by converting
+ # to object will raise for mean, std etc
+ alt = ser
- if pa.types.is_integer(pa_dtype) or pa.types.is_floating(pa_dtype):
- ser = ser.astype("Float64")
# TODO: in the opposite case, aren't we testing... nothing?
if op_name == "count":
- expected = getattr(ser, op_name)()
+ result = getattr(ser, op_name)()
+ expected = getattr(alt, op_name)()
else:
- expected = getattr(ser, op_name)(skipna=skipna)
+ result = getattr(ser, op_name)(skipna=skipna)
+ expected = getattr(alt, op_name)(skipna=skipna)
tm.assert_almost_equal(result, expected)
@pytest.mark.parametrize("skipna", [True, False])
diff --git a/pandas/tests/extension/test_categorical.py b/pandas/tests/extension/test_categorical.py
index 3ceb32f181986..79b8e9ddbf8f5 100644
--- a/pandas/tests/extension/test_categorical.py
+++ b/pandas/tests/extension/test_categorical.py
@@ -179,12 +179,6 @@ def _compare_other(self, s, data, op, other):
def test_array_repr(self, data, size):
super().test_array_repr(data, size)
- @pytest.mark.xfail(
- reason="Looks like the test (incorrectly) implicitly assumes int/bool dtype"
- )
- def test_invert(self, data):
- super().test_invert(data)
-
@pytest.mark.xfail(reason="TBD")
@pytest.mark.parametrize("as_index", [True, False])
def test_groupby_extension_agg(self, as_index, data_for_grouping):
diff --git a/pandas/tests/extension/test_interval.py b/pandas/tests/extension/test_interval.py
index 66b25abb55961..f37ac4b289852 100644
--- a/pandas/tests/extension/test_interval.py
+++ b/pandas/tests/extension/test_interval.py
@@ -13,6 +13,10 @@
be added to the array-specific tests in `pandas/tests/arrays/`.
"""
+from __future__ import annotations
+
+from typing import TYPE_CHECKING
+
import numpy as np
import pytest
@@ -22,6 +26,9 @@
from pandas.core.arrays import IntervalArray
from pandas.tests.extension import base
+if TYPE_CHECKING:
+ import pandas as pd
+
def make_data():
N = 100
@@ -73,7 +80,7 @@ def data_for_grouping():
class TestIntervalArray(base.ExtensionTests):
divmod_exc = TypeError
- def _supports_reduction(self, obj, op_name: str) -> bool:
+ def _supports_reduction(self, ser: pd.Series, op_name: str) -> bool:
return op_name in ["min", "max"]
@pytest.mark.xfail(
@@ -89,12 +96,6 @@ def test_EA_types(self, engine, data):
with pytest.raises(NotImplementedError, match=expected_msg):
super().test_EA_types(engine, data)
- @pytest.mark.xfail(
- reason="Looks like the test (incorrectly) implicitly assumes int/bool dtype"
- )
- def test_invert(self, data):
- super().test_invert(data)
-
# TODO: either belongs in tests.arrays.interval or move into base tests.
def test_fillna_non_scalar_raises(data_missing):
diff --git a/pandas/tests/extension/test_masked.py b/pandas/tests/extension/test_masked.py
index bed406e902483..7efb8fbad8cd1 100644
--- a/pandas/tests/extension/test_masked.py
+++ b/pandas/tests/extension/test_masked.py
@@ -238,8 +238,8 @@ def test_combine_le(self, data_repeated):
self._combine_le_expected_dtype = object
super().test_combine_le(data_repeated)
- def _supports_reduction(self, obj, op_name: str) -> bool:
- if op_name in ["any", "all"] and tm.get_dtype(obj).kind != "b":
+ def _supports_reduction(self, ser: pd.Series, op_name: str) -> bool:
+ if op_name in ["any", "all"] and ser.dtype.kind != "b":
pytest.skip(reason="Tested in tests/reductions/test_reductions.py")
return True
@@ -256,12 +256,16 @@ def check_reduce(self, ser: pd.Series, op_name: str, skipna: bool):
if op_name in ["min", "max"]:
cmp_dtype = "bool"
+ # TODO: prod with integer dtypes does *not* match the result we would
+ # get if we used object for cmp_dtype. In that cae the object result
+ # is a large integer while the non-object case overflows and returns 0
+ alt = ser.dropna().astype(cmp_dtype)
if op_name == "count":
result = getattr(ser, op_name)()
- expected = getattr(ser.dropna().astype(cmp_dtype), op_name)()
+ expected = getattr(alt, op_name)()
else:
result = getattr(ser, op_name)(skipna=skipna)
- expected = getattr(ser.dropna().astype(cmp_dtype), op_name)(skipna=skipna)
+ expected = getattr(alt, op_name)(skipna=skipna)
if not skipna and ser.isna().any() and op_name not in ["any", "all"]:
expected = pd.NA
tm.assert_almost_equal(result, expected)
@@ -350,15 +354,6 @@ def check_accumulate(self, ser: pd.Series, op_name: str, skipna: bool):
else:
raise NotImplementedError(f"{op_name} not supported")
- def test_invert(self, data, request):
- if data.dtype.kind == "f":
- mark = pytest.mark.xfail(
- reason="Looks like the base class test implicitly assumes "
- "boolean/integer dtypes"
- )
- request.node.add_marker(mark)
- super().test_invert(data)
-
class Test2DCompat(base.Dim2CompatTests):
pass
diff --git a/pandas/tests/extension/test_numpy.py b/pandas/tests/extension/test_numpy.py
index a54729de57a97..542e938d1a40a 100644
--- a/pandas/tests/extension/test_numpy.py
+++ b/pandas/tests/extension/test_numpy.py
@@ -302,15 +302,19 @@ class TestPrinting(BaseNumPyTests, base.BasePrintingTests):
class TestReduce(BaseNumPyTests, base.BaseReduceTests):
- def _supports_reduction(self, obj, op_name: str) -> bool:
- if tm.get_dtype(obj).kind == "O":
+ def _supports_reduction(self, ser: pd.Series, op_name: str) -> bool:
+ if ser.dtype.kind == "O":
return op_name in ["sum", "min", "max", "any", "all"]
return True
- def check_reduce(self, s, op_name, skipna):
- res_op = getattr(s, op_name)
+ def check_reduce(self, ser: pd.Series, op_name: str, skipna: bool):
+ res_op = getattr(ser, op_name)
# avoid coercing int -> float. Just cast to the actual numpy type.
- exp_op = getattr(s.astype(s.dtype._dtype), op_name)
+ # error: Item "ExtensionDtype" of "dtype[Any] | ExtensionDtype" has
+ # no attribute "numpy_dtype"
+ cmp_dtype = ser.dtype.numpy_dtype # type: ignore[union-attr]
+ alt = ser.astype(cmp_dtype)
+ exp_op = getattr(alt, op_name)
if op_name == "count":
result = res_op()
expected = exp_op()
diff --git a/pandas/tests/extension/test_string.py b/pandas/tests/extension/test_string.py
index 6597ff84e3ca4..c3440b3bdb318 100644
--- a/pandas/tests/extension/test_string.py
+++ b/pandas/tests/extension/test_string.py
@@ -157,16 +157,8 @@ def test_fillna_no_op_returns_copy(self, data):
class TestReduce(base.BaseReduceTests):
- @pytest.mark.parametrize("skipna", [True, False])
- def test_reduce_series_numeric(self, data, all_numeric_reductions, skipna):
- op_name = all_numeric_reductions
-
- if op_name in ["min", "max"]:
- return None
-
- ser = pd.Series(data)
- with pytest.raises(TypeError):
- getattr(ser, op_name)(skipna=skipna)
+ def _supports_reduction(self, ser: pd.Series, op_name: str) -> bool:
+ return op_name in ["min", "max"]
class TestMethods(base.BaseMethodsTests):
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54576 | 2023-08-16T03:39:16Z | 2023-08-17T15:52:51Z | 2023-08-17T15:52:51Z | 2023-08-21T16:05:39Z |
BUG: _validate_setitem_value fails to raise for PandasArray #51044 | diff --git a/pandas/core/arrays/numpy_.py b/pandas/core/arrays/numpy_.py
index 99a3586871d10..e35ceb8b662e1 100644
--- a/pandas/core/arrays/numpy_.py
+++ b/pandas/core/arrays/numpy_.py
@@ -15,7 +15,10 @@
from pandas.compat.numpy import function as nv
from pandas.core.dtypes.astype import astype_array
-from pandas.core.dtypes.cast import construct_1d_object_array_from_listlike
+from pandas.core.dtypes.cast import (
+ construct_1d_object_array_from_listlike,
+ np_can_hold_element,
+)
from pandas.core.dtypes.common import pandas_dtype
from pandas.core.dtypes.dtypes import NumpyEADtype
from pandas.core.dtypes.missing import isna
@@ -507,6 +510,53 @@ def to_numpy(
return result
+ def _validate_setitem_value(self, value):
+ if type(value) == int:
+ try:
+ np_can_hold_element(self.dtype, value)
+ except Exception:
+ pass
+ return value
+ elif type(value) == float:
+ if (
+ self.dtype
+ in [
+ NumpyEADtype("float32"),
+ NumpyEADtype("float64"),
+ NumpyEADtype("object"),
+ ]
+ or self.dtype is None
+ ):
+ return value
+ elif type(value) not in [int, float] and (
+ self.dtype
+ not in [
+ NumpyEADtype("int64"),
+ NumpyEADtype("float64"),
+ NumpyEADtype("uint16"),
+ NumpyEADtype("object"),
+ ]
+ or lib.is_list_like(value)
+ ):
+ return value
+ if self.dtype is None:
+ return value
+ if not isinstance(self.dtype, NumpyEADtype):
+ return value
+ if (
+ NumpyEADtype(type(value)) == NumpyEADtype(self.dtype)
+ or NumpyEADtype(type(value)) == self.dtype
+ ):
+ return value
+ if self.dtype == NumpyEADtype("object"):
+ return value
+
+ raise TypeError(
+ "value cannot be inserted without changing the dtype. value:"
+ f"{value}, type(value): {type(value)}, NumpyEADtype(type(value)):"
+ f" {NumpyEADtype(type(value))}, self.dtype: {self.dtype}"
+ )
+
# ------------------------------------------------------------------------
# Ops
diff --git a/pandas/tests/arrays/numpy_/test_numpy.py b/pandas/tests/arrays/numpy_/test_numpy.py
index 4217745e60e76..54f49aca6cc3e 100644
--- a/pandas/tests/arrays/numpy_/test_numpy.py
+++ b/pandas/tests/arrays/numpy_/test_numpy.py
@@ -279,7 +279,7 @@ def test_setitem_no_coercion():
# With a value that we do coerce, check that we coerce the value
# and not the underlying array.
- arr[0] = 2.5
+ arr[0] = 2
assert isinstance(arr[0], (int, np.integer)), type(arr[0])
@@ -295,7 +295,7 @@ def test_setitem_preserves_views():
assert view2[0] == 9
assert view3[0] == 9
- arr[-1] = 2.5
+ arr[-1] = 2
view1[-1] = 5
assert arr[-1] == 5
@@ -322,3 +322,12 @@ def test_factorize_unsigned():
tm.assert_numpy_array_equal(res_codes, exp_codes)
tm.assert_extension_array_equal(res_unique, NumpyExtensionArray(exp_unique))
+
+
+def test_array_validate_setitem_value():
+ # Issue# 51044
+ arr = pd.Series(range(5)).array
+ with pytest.raises(TypeError, match="str"):
+ arr._validate_setitem_value("foo")
+ with pytest.raises(TypeError, match="float"):
+ arr._validate_setitem_value(1.5)
| - [x] closes #51044 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
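For context, the invariant this PR is after can be seen with a plain numpy array (a sketch; `NumpyExtensionArray` wraps such an array, so a value that numpy cannot represent in the existing dtype should likewise be rejected rather than silently changing the dtype):

```python
import numpy as np

# An int64 array refuses a value it cannot parse as an integer,
# instead of silently switching to object dtype.
arr = np.arange(5)  # dtype int64
try:
    arr[0] = "foo"  # numpy raises ValueError here
    raised = False
except ValueError:
    raised = True
print(raised)  # True
```

Note that `arr[0] = 2.5` would *not* raise but silently truncate to `2`, which is why the PR adds an explicit float check on top of numpy's own coercion rules.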
| https://api.github.com/repos/pandas-dev/pandas/pulls/54575 | 2023-08-16T02:58:59Z | 2023-10-12T16:49:44Z | null | 2023-10-12T16:49:45Z |
ENH: add cummax/cummin/cumprod support for arrow dtypes | diff --git a/doc/source/whatsnew/v2.1.0.rst b/doc/source/whatsnew/v2.1.0.rst
index d1a689dc60830..8a9f786fa87b2 100644
--- a/doc/source/whatsnew/v2.1.0.rst
+++ b/doc/source/whatsnew/v2.1.0.rst
@@ -264,6 +264,7 @@ Other enhancements
- Many read/to_* functions, such as :meth:`DataFrame.to_pickle` and :func:`read_csv`, support forwarding compression arguments to lzma.LZMAFile (:issue:`52979`)
- Reductions :meth:`Series.argmax`, :meth:`Series.argmin`, :meth:`Series.idxmax`, :meth:`Series.idxmin`, :meth:`Index.argmax`, :meth:`Index.argmin`, :meth:`DataFrame.idxmax`, :meth:`DataFrame.idxmin` are now supported for object-dtype objects (:issue:`4279`, :issue:`18021`, :issue:`40685`, :issue:`43697`)
- :meth:`DataFrame.to_parquet` and :func:`read_parquet` will now write and read ``attrs`` respectively (:issue:`54346`)
+- :meth:`Series.cummax`, :meth:`Series.cummin` and :meth:`Series.cumprod` are now supported for pyarrow dtypes with pyarrow version 13.0 and above (:issue:`52085`)
- Added support for the DataFrame Consortium Standard (:issue:`54383`)
- Performance improvement in :meth:`.GroupBy.quantile` (:issue:`51722`)
diff --git a/pandas/core/arrays/arrow/array.py b/pandas/core/arrays/arrow/array.py
index 0f46e5a4e7482..3c65e6b4879e2 100644
--- a/pandas/core/arrays/arrow/array.py
+++ b/pandas/core/arrays/arrow/array.py
@@ -1389,6 +1389,9 @@ def _accumulate(
NotImplementedError : subclass does not define accumulations
"""
pyarrow_name = {
+ "cummax": "cumulative_max",
+ "cummin": "cumulative_min",
+ "cumprod": "cumulative_prod_checked",
"cumsum": "cumulative_sum_checked",
}.get(name, name)
pyarrow_meth = getattr(pc, pyarrow_name, None)
@@ -1398,12 +1401,20 @@ def _accumulate(
data_to_accum = self._pa_array
pa_dtype = data_to_accum.type
- if pa.types.is_duration(pa_dtype):
- data_to_accum = data_to_accum.cast(pa.int64())
+
+ convert_to_int = (
+ pa.types.is_temporal(pa_dtype) and name in ["cummax", "cummin"]
+ ) or (pa.types.is_duration(pa_dtype) and name == "cumsum")
+
+ if convert_to_int:
+ if pa_dtype.bit_width == 32:
+ data_to_accum = data_to_accum.cast(pa.int32())
+ else:
+ data_to_accum = data_to_accum.cast(pa.int64())
result = pyarrow_meth(data_to_accum, skip_nulls=skipna, **kwargs)
- if pa.types.is_duration(pa_dtype):
+ if convert_to_int:
result = result.cast(pa_dtype)
return type(self)(result)
diff --git a/pandas/tests/extension/test_arrow.py b/pandas/tests/extension/test_arrow.py
index 4c05049ddfcf5..01dedc86c8dca 100644
--- a/pandas/tests/extension/test_arrow.py
+++ b/pandas/tests/extension/test_arrow.py
@@ -339,10 +339,15 @@ def test_from_sequence_of_strings_pa_array(self, data, request):
def check_accumulate(self, ser, op_name, skipna):
result = getattr(ser, op_name)(skipna=skipna)
- if ser.dtype.kind == "m":
+ pa_type = ser.dtype.pyarrow_dtype
+ if pa.types.is_temporal(pa_type):
# Just check that we match the integer behavior.
- ser = ser.astype("int64[pyarrow]")
- result = result.astype("int64[pyarrow]")
+ if pa_type.bit_width == 32:
+ int_type = "int32[pyarrow]"
+ else:
+ int_type = "int64[pyarrow]"
+ ser = ser.astype(int_type)
+ result = result.astype(int_type)
result = result.astype("Float64")
expected = getattr(ser.astype("Float64"), op_name)(skipna=skipna)
@@ -353,14 +358,20 @@ def _supports_accumulation(self, ser: pd.Series, op_name: str) -> bool:
# attribute "pyarrow_dtype"
pa_type = ser.dtype.pyarrow_dtype # type: ignore[union-attr]
- if pa.types.is_string(pa_type) or pa.types.is_binary(pa_type):
- if op_name in ["cumsum", "cumprod"]:
+ if (
+ pa.types.is_string(pa_type)
+ or pa.types.is_binary(pa_type)
+ or pa.types.is_decimal(pa_type)
+ ):
+ if op_name in ["cumsum", "cumprod", "cummax", "cummin"]:
return False
- elif pa.types.is_temporal(pa_type) and not pa.types.is_duration(pa_type):
- if op_name in ["cumsum", "cumprod"]:
+ elif pa.types.is_boolean(pa_type):
+ if op_name in ["cumprod", "cummax", "cummin"]:
return False
- elif pa.types.is_duration(pa_type):
- if op_name == "cumprod":
+ elif pa.types.is_temporal(pa_type):
+ if op_name == "cumsum" and not pa.types.is_duration(pa_type):
+ return False
+ elif op_name == "cumprod":
return False
return True
@@ -376,7 +387,9 @@ def test_accumulate_series(self, data, all_numeric_accumulations, skipna, reques
data, all_numeric_accumulations, skipna
)
- if all_numeric_accumulations != "cumsum" or pa_version_under9p0:
+ if pa_version_under9p0 or (
+ pa_version_under13p0 and all_numeric_accumulations != "cumsum"
+ ):
# xfailing takes a long time to run because pytest
# renders the exception messages even when not showing them
opt = request.config.option
| - [x] closes #52085
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/v2.2.0.rst` file if fixing a bug or adding a new feature.
un-xfails 186 tests which saves a chunk of time in `test_arrow.py`
The new pyarrow cumulative functions are only available starting in pyarrow version 13 which is not yet released but is currently tested in CI via the pyarrow nightly job. We could hold off on merging until pyarrow 13 is released if there are any concerns.
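For reference, the semantics the new pyarrow kernels are expected to mirror are the existing numpy-backed ones (a sketch; with pyarrow >= 13 installed, the same calls work on a series with an ``ArrowDtype`` such as ``"int64[pyarrow]"``):

```python
import pandas as pd

# numpy-backed cumulative reductions; the pyarrow path should match these.
ser = pd.Series([2, 5, 3, 4])
print(ser.cummax().tolist())   # [2, 5, 5, 5]
print(ser.cummin().tolist())   # [2, 2, 2, 2]
print(ser.cumprod().tolist())  # [2, 10, 30, 120]
```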
| https://api.github.com/repos/pandas-dev/pandas/pulls/54574 | 2023-08-16T00:47:17Z | 2023-08-17T15:54:13Z | 2023-08-17T15:54:13Z | 2023-09-06T00:54:08Z |
TST: use single-class pattern for Arrow, Masked tests | diff --git a/pandas/tests/extension/test_arrow.py b/pandas/tests/extension/test_arrow.py
index dd1ff925adf5f..4c05049ddfcf5 100644
--- a/pandas/tests/extension/test_arrow.py
+++ b/pandas/tests/extension/test_arrow.py
@@ -265,7 +265,7 @@ def data_for_twos(data):
# TODO: skip otherwise?
-class TestBaseCasting(base.BaseCastingTests):
+class TestArrowArray(base.ExtensionTests):
def test_astype_str(self, data, request):
pa_dtype = data.dtype.pyarrow_dtype
if pa.types.is_binary(pa_dtype):
@@ -276,8 +276,6 @@ def test_astype_str(self, data, request):
)
super().test_astype_str(data)
-
-class TestConstructors(base.BaseConstructorsTests):
def test_from_dtype(self, data, request):
pa_dtype = data.dtype.pyarrow_dtype
if pa.types.is_string(pa_dtype) or pa.types.is_decimal(pa_dtype):
@@ -338,12 +336,6 @@ def test_from_sequence_of_strings_pa_array(self, data, request):
result = type(data)._from_sequence_of_strings(pa_array, dtype=data.dtype)
tm.assert_extension_array_equal(result, data)
-
-class TestGetitemTests(base.BaseGetitemTests):
- pass
-
-
-class TestBaseAccumulateTests(base.BaseAccumulateTests):
def check_accumulate(self, ser, op_name, skipna):
result = getattr(ser, op_name)(skipna=skipna)
@@ -409,8 +401,6 @@ def test_accumulate_series(self, data, all_numeric_accumulations, skipna, reques
self.check_accumulate(ser, op_name, skipna)
-
-class TestReduce(base.BaseReduceTests):
def _supports_reduction(self, obj, op_name: str) -> bool:
dtype = tm.get_dtype(obj)
# error: Item "dtype[Any]" of "dtype[Any] | ExtensionDtype" has
@@ -561,8 +551,6 @@ def test_median_not_approximate(self, typ):
result = pd.Series([1, 2], dtype=f"{typ}[pyarrow]").median()
assert result == 1.5
-
-class TestBaseGroupby(base.BaseGroupbyTests):
def test_in_numeric_groupby(self, data_for_grouping):
dtype = data_for_grouping.dtype
if is_string_dtype(dtype):
@@ -583,8 +571,6 @@ def test_in_numeric_groupby(self, data_for_grouping):
else:
super().test_in_numeric_groupby(data_for_grouping)
-
-class TestBaseDtype(base.BaseDtypeTests):
def test_construct_from_string_own_name(self, dtype, request):
pa_dtype = dtype.pyarrow_dtype
if pa.types.is_decimal(pa_dtype):
@@ -651,20 +637,12 @@ def test_is_not_string_type(self, dtype):
else:
super().test_is_not_string_type(dtype)
-
-class TestBaseIndex(base.BaseIndexTests):
- pass
-
-
-class TestBaseInterface(base.BaseInterfaceTests):
@pytest.mark.xfail(
reason="GH 45419: pyarrow.ChunkedArray does not support views.", run=False
)
def test_view(self, data):
super().test_view(data)
-
-class TestBaseMissing(base.BaseMissingTests):
def test_fillna_no_op_returns_copy(self, data):
data = data[~data.isna()]
@@ -677,28 +655,18 @@ def test_fillna_no_op_returns_copy(self, data):
assert result is not data
tm.assert_extension_array_equal(result, data)
-
-class TestBasePrinting(base.BasePrintingTests):
- pass
-
-
-class TestBaseReshaping(base.BaseReshapingTests):
@pytest.mark.xfail(
reason="GH 45419: pyarrow.ChunkedArray does not support views", run=False
)
def test_transpose(self, data):
super().test_transpose(data)
-
-class TestBaseSetitem(base.BaseSetitemTests):
@pytest.mark.xfail(
reason="GH 45419: pyarrow.ChunkedArray does not support views", run=False
)
def test_setitem_preserves_views(self, data):
super().test_setitem_preserves_views(data)
-
-class TestBaseParsing(base.BaseParsingTests):
@pytest.mark.parametrize("dtype_backend", ["pyarrow", no_default])
@pytest.mark.parametrize("engine", ["c", "python"])
def test_EA_types(self, engine, data, dtype_backend, request):
@@ -736,8 +704,6 @@ def test_EA_types(self, engine, data, dtype_backend, request):
expected = df
tm.assert_frame_equal(result, expected)
-
-class TestBaseUnaryOps(base.BaseUnaryOpsTests):
def test_invert(self, data, request):
pa_dtype = data.dtype.pyarrow_dtype
if not pa.types.is_boolean(pa_dtype):
@@ -749,8 +715,6 @@ def test_invert(self, data, request):
)
super().test_invert(data)
-
-class TestBaseMethods(base.BaseMethodsTests):
@pytest.mark.parametrize("periods", [1, -2])
def test_diff(self, data, periods, request):
pa_dtype = data.dtype.pyarrow_dtype
@@ -814,8 +778,6 @@ def test_argreduce_series(
_combine_le_expected_dtype = "bool[pyarrow]"
-
-class TestBaseArithmeticOps(base.BaseArithmeticOpsTests):
divmod_exc = NotImplementedError
def get_op_from_name(self, op_name):
@@ -838,6 +800,9 @@ def _cast_pointwise_result(self, op_name: str, obj, other, pointwise_result):
# while ArrowExtensionArray maintains original type
expected = pointwise_result
+ if op_name in ["eq", "ne", "lt", "le", "gt", "ge"]:
+ return pointwise_result.astype("boolean[pyarrow]")
+
was_frame = False
if isinstance(expected, pd.DataFrame):
was_frame = True
@@ -1121,28 +1086,6 @@ def test_add_series_with_extension_array(self, data, request):
)
super().test_add_series_with_extension_array(data)
-
-class TestBaseComparisonOps(base.BaseComparisonOpsTests):
- def test_compare_array(self, data, comparison_op, na_value):
- ser = pd.Series(data)
- # pd.Series([ser.iloc[0]] * len(ser)) may not return ArrowExtensionArray
- # since ser.iloc[0] is a python scalar
- other = pd.Series(pd.array([ser.iloc[0]] * len(ser), dtype=data.dtype))
- if comparison_op.__name__ in ["eq", "ne"]:
- # comparison should match point-wise comparisons
- result = comparison_op(ser, other)
- # Series.combine does not calculate the NA mask correctly
- # when comparing over an array
- assert result[8] is na_value
- assert result[97] is na_value
- expected = ser.combine(other, comparison_op)
- expected[8] = na_value
- expected[97] = na_value
- tm.assert_series_equal(result, expected)
-
- else:
- return super().test_compare_array(data, comparison_op)
-
def test_invalid_other_comp(self, data, comparison_op):
# GH 48833
with pytest.raises(
diff --git a/pandas/tests/extension/test_masked.py b/pandas/tests/extension/test_masked.py
index c4195be8ea121..bed406e902483 100644
--- a/pandas/tests/extension/test_masked.py
+++ b/pandas/tests/extension/test_masked.py
@@ -159,11 +159,7 @@ def data_for_grouping(dtype):
return pd.array([b, b, na, na, a, a, b, c], dtype=dtype)
-class TestDtype(base.BaseDtypeTests):
- pass
-
-
-class TestArithmeticOps(base.BaseArithmeticOpsTests):
+class TestMaskedArrays(base.ExtensionTests):
def _get_expected_exception(self, op_name, obj, other):
try:
dtype = tm.get_dtype(obj)
@@ -179,12 +175,15 @@ def _get_expected_exception(self, op_name, obj, other):
# exception message would include "numpy boolean subtract""
return TypeError
return None
- return super()._get_expected_exception(op_name, obj, other)
+ return None
def _cast_pointwise_result(self, op_name: str, obj, other, pointwise_result):
sdtype = tm.get_dtype(obj)
expected = pointwise_result
+ if op_name in ("eq", "ne", "le", "ge", "lt", "gt"):
+ return expected.astype("boolean")
+
if sdtype.kind in "iu":
if op_name in ("__rtruediv__", "__truediv__", "__div__"):
expected = expected.fillna(np.nan).astype("Float64")
@@ -219,11 +218,6 @@ def _cast_pointwise_result(self, op_name: str, obj, other, pointwise_result):
expected = expected.astype(sdtype)
return expected
- series_scalar_exc = None
- series_array_exc = None
- frame_scalar_exc = None
- divmod_exc = None
-
def test_divmod_series_array(self, data, data_for_twos, request):
if data.dtype.kind == "b":
mark = pytest.mark.xfail(
@@ -234,49 +228,6 @@ def test_divmod_series_array(self, data, data_for_twos, request):
request.node.add_marker(mark)
super().test_divmod_series_array(data, data_for_twos)
-
-class TestComparisonOps(base.BaseComparisonOpsTests):
- series_scalar_exc = None
- series_array_exc = None
- frame_scalar_exc = None
-
- def _cast_pointwise_result(self, op_name: str, obj, other, pointwise_result):
- return pointwise_result.astype("boolean")
-
-
-class TestInterface(base.BaseInterfaceTests):
- pass
-
-
-class TestConstructors(base.BaseConstructorsTests):
- pass
-
-
-class TestReshaping(base.BaseReshapingTests):
- pass
-
- # for test_concat_mixed_dtypes test
- # concat of an Integer and Int coerces to object dtype
- # TODO(jreback) once integrated this would
-
-
-class TestGetitem(base.BaseGetitemTests):
- pass
-
-
-class TestSetitem(base.BaseSetitemTests):
- pass
-
-
-class TestIndex(base.BaseIndexTests):
- pass
-
-
-class TestMissing(base.BaseMissingTests):
- pass
-
-
-class TestMethods(base.BaseMethodsTests):
def test_combine_le(self, data_repeated):
# TODO: patching self is a bad pattern here
orig_data1, orig_data2 = data_repeated(2)
@@ -287,16 +238,6 @@ def test_combine_le(self, data_repeated):
self._combine_le_expected_dtype = object
super().test_combine_le(data_repeated)
-
-class TestCasting(base.BaseCastingTests):
- pass
-
-
-class TestGroupby(base.BaseGroupbyTests):
- pass
-
-
-class TestReduce(base.BaseReduceTests):
def _supports_reduction(self, obj, op_name: str) -> bool:
if op_name in ["any", "all"] and tm.get_dtype(obj).kind != "b":
pytest.skip(reason="Tested in tests/reductions/test_reductions.py")
@@ -351,8 +292,6 @@ def _get_expected_reduction_dtype(self, arr, op_name: str):
raise TypeError("not supposed to reach this")
return cmp_dtype
-
-class TestAccumulation(base.BaseAccumulateTests):
def _supports_accumulation(self, ser: pd.Series, op_name: str) -> bool:
return True
@@ -411,8 +350,6 @@ def check_accumulate(self, ser: pd.Series, op_name: str, skipna: bool):
else:
raise NotImplementedError(f"{op_name} not supported")
-
-class TestUnaryOps(base.BaseUnaryOpsTests):
def test_invert(self, data, request):
if data.dtype.kind == "f":
mark = pytest.mark.xfail(
@@ -423,13 +360,5 @@ def test_invert(self, data, request):
super().test_invert(data)
-class TestPrinting(base.BasePrintingTests):
- pass
-
-
-class TestParsing(base.BaseParsingTests):
- pass
-
-
class Test2DCompat(base.Dim2CompatTests):
pass
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54573 | 2023-08-15T23:35:11Z | 2023-08-16T18:07:21Z | 2023-08-16T18:07:21Z | 2023-08-16T18:26:09Z |
TST: unskip test_close_file_handle_on_invalid_usecols #45566 | diff --git a/pandas/tests/io/parser/test_unsupported.py b/pandas/tests/io/parser/test_unsupported.py
index f5a0bcd2c00fd..ac171568187cd 100644
--- a/pandas/tests/io/parser/test_unsupported.py
+++ b/pandas/tests/io/parser/test_unsupported.py
@@ -12,11 +12,6 @@
import pytest
-from pandas.compat import (
- is_ci_environment,
- is_platform_mac,
- is_platform_windows,
-)
from pandas.errors import ParserError
import pandas._testing as tm
@@ -177,9 +172,6 @@ def test_close_file_handle_on_invalid_usecols(all_parsers):
if parser.engine == "pyarrow":
pyarrow = pytest.importorskip("pyarrow")
error = pyarrow.lib.ArrowKeyError
- if is_ci_environment() and (is_platform_windows() or is_platform_mac()):
- # GH#45547 causes timeouts on windows/mac builds
- pytest.skip("GH#45547 causing timeouts on windows/mac builds 2022-01-22")
with tm.ensure_clean("test.csv") as fname:
Path(fname).write_text("col1,col2\na,b\n1,2", encoding="utf-8")
| null | https://api.github.com/repos/pandas-dev/pandas/pulls/54572 | 2023-08-15T23:23:49Z | 2023-08-22T22:53:18Z | 2023-08-22T22:53:18Z | 2023-09-11T20:35:56Z |
CI: Ignore numpy.distutils DeprecationWarning from array_api_compat | diff --git a/.github/workflows/unit-tests.yml b/.github/workflows/unit-tests.yml
index 66d8320206429..003fc7192abbc 100644
--- a/.github/workflows/unit-tests.yml
+++ b/.github/workflows/unit-tests.yml
@@ -76,7 +76,9 @@ jobs:
- name: "Numpy Dev"
env_file: actions-311-numpydev.yaml
pattern: "not slow and not network and not single_cpu"
- test_args: "-W error::DeprecationWarning -W error::FutureWarning"
+ # TODO: Add back "-W error::DeprecationWarning" once testing on PY312 or
+ # https://github.com/data-apis/array-api-compat/issues/53 is fixed
+ test_args: "-W error::FutureWarning"
# TODO(cython3): Re-enable once next-beta(after beta 1) comes out
# There are some warnings failing the build with -werror
pandas_ci: "0"
| null | https://api.github.com/repos/pandas-dev/pandas/pulls/54571 | 2023-08-15T23:14:08Z | 2023-08-16T17:35:54Z | null | 2023-08-16T17:35:57Z |
TST: use single-class pattern for Categorical tests | diff --git a/pandas/tests/extension/base/groupby.py b/pandas/tests/extension/base/groupby.py
index 6f72a6c2b04ae..489f43729a004 100644
--- a/pandas/tests/extension/base/groupby.py
+++ b/pandas/tests/extension/base/groupby.py
@@ -13,6 +13,9 @@
import pandas._testing as tm
+@pytest.mark.filterwarnings(
+ "ignore:The default of observed=False is deprecated:FutureWarning"
+)
class BaseGroupbyTests:
"""Groupby-specific tests."""
diff --git a/pandas/tests/extension/test_categorical.py b/pandas/tests/extension/test_categorical.py
index 33e5c9ad72982..3ceb32f181986 100644
--- a/pandas/tests/extension/test_categorical.py
+++ b/pandas/tests/extension/test_categorical.py
@@ -72,11 +72,7 @@ def data_for_grouping():
return Categorical(["a", "a", None, None, "b", "b", "a", "c"])
-class TestDtype(base.BaseDtypeTests):
- pass
-
-
-class TestInterface(base.BaseInterfaceTests):
+class TestCategorical(base.ExtensionTests):
@pytest.mark.xfail(reason="Memory usage doesn't match")
def test_memory_usage(self, data):
# TODO: Is this deliberate?
@@ -106,8 +102,6 @@ def test_contains(self, data, data_missing):
assert na_value_obj not in data
assert na_value_obj in data_missing # this line differs from super method
-
-class TestConstructors(base.BaseConstructorsTests):
def test_empty(self, dtype):
cls = dtype.construct_array_type()
result = cls._empty((4,), dtype=dtype)
@@ -117,12 +111,6 @@ def test_empty(self, dtype):
# dtype on our result.
assert result.dtype == CategoricalDtype([])
-
-class TestReshaping(base.BaseReshapingTests):
- pass
-
-
-class TestGetitem(base.BaseGetitemTests):
@pytest.mark.skip(reason="Backwards compatibility")
def test_getitem_scalar(self, data):
# CategoricalDtype.type isn't "correct" since it should
@@ -130,28 +118,6 @@ def test_getitem_scalar(self, data):
# to break things by changing.
super().test_getitem_scalar(data)
-
-class TestSetitem(base.BaseSetitemTests):
- pass
-
-
-class TestIndex(base.BaseIndexTests):
- pass
-
-
-class TestMissing(base.BaseMissingTests):
- pass
-
-
-class TestReduce(base.BaseReduceTests):
- pass
-
-
-class TestAccumulate(base.BaseAccumulateTests):
- pass
-
-
-class TestMethods(base.BaseMethodsTests):
@pytest.mark.xfail(reason="Unobserved categories included")
def test_value_counts(self, all_data, dropna):
return super().test_value_counts(all_data, dropna)
@@ -178,12 +144,6 @@ def test_map(self, data, na_action):
result = data.map(lambda x: x, na_action=na_action)
tm.assert_extension_array_equal(result, data)
-
-class TestCasting(base.BaseCastingTests):
- pass
-
-
-class TestArithmeticOps(base.BaseArithmeticOpsTests):
def test_arith_frame_with_scalar(self, data, all_arithmetic_operators, request):
# frame & scalar
op_name = all_arithmetic_operators
@@ -205,8 +165,6 @@ def test_arith_series_with_scalar(self, data, all_arithmetic_operators, request)
)
super().test_arith_series_with_scalar(data, op_name)
-
-class TestComparisonOps(base.BaseComparisonOpsTests):
def _compare_other(self, s, data, op, other):
op_name = f"__{op.__name__}__"
if op_name not in ["__eq__", "__ne__"]:
@@ -216,9 +174,21 @@ def _compare_other(self, s, data, op, other):
else:
return super()._compare_other(s, data, op, other)
-
-class TestParsing(base.BaseParsingTests):
- pass
+ @pytest.mark.xfail(reason="Categorical overrides __repr__")
+ @pytest.mark.parametrize("size", ["big", "small"])
+ def test_array_repr(self, data, size):
+ super().test_array_repr(data, size)
+
+ @pytest.mark.xfail(
+ reason="Looks like the test (incorrectly) implicitly assumes int/bool dtype"
+ )
+ def test_invert(self, data):
+ super().test_invert(data)
+
+ @pytest.mark.xfail(reason="TBD")
+ @pytest.mark.parametrize("as_index", [True, False])
+ def test_groupby_extension_agg(self, as_index, data_for_grouping):
+ super().test_groupby_extension_agg(as_index, data_for_grouping)
class Test2DCompat(base.NDArrayBacked2DTests):
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54570 | 2023-08-15T23:00:46Z | 2023-08-16T20:04:42Z | 2023-08-16T20:04:42Z | 2023-08-16T20:57:42Z |
BUG: inverting object dtype containing bools | diff --git a/doc/source/whatsnew/v2.1.0.rst b/doc/source/whatsnew/v2.1.0.rst
index d1a689dc60830..af9b996a449d6 100644
--- a/doc/source/whatsnew/v2.1.0.rst
+++ b/doc/source/whatsnew/v2.1.0.rst
@@ -698,7 +698,8 @@ Numeric
- Bug in :meth:`Series.corr` and :meth:`Series.cov` raising ``AttributeError`` for masked dtypes (:issue:`51422`)
- Bug in :meth:`Series.median` and :meth:`DataFrame.median` with object-dtype values containing strings that can be converted to numbers (e.g. "2") returning incorrect numeric results; these now raise ``TypeError`` (:issue:`34671`)
- Bug in :meth:`Series.sum` converting dtype ``uint64`` to ``int64`` (:issue:`53401`)
-
+- Bug in inverting an object-dtype :class:`DataFrame` column or :class:`Series` containing ``bool`` objects giving integer results instead of flipped bools (:issue:`54569`)
+-
Conversion
^^^^^^^^^^
diff --git a/pandas/_libs/lib.pyi b/pandas/_libs/lib.pyi
index 32641319a6b96..60400d5f52e48 100644
--- a/pandas/_libs/lib.pyi
+++ b/pandas/_libs/lib.pyi
@@ -38,6 +38,7 @@ i8max: int
u8max: int
def is_np_dtype(dtype: object, kinds: str | None = ...) -> TypeGuard[np.dtype]: ...
+def invert_object_array(values: npt.NDArray[np.object_]) -> npt.NDArray[np.object_]: ...
def item_from_zerodim(val: object) -> object: ...
def infer_dtype(value: object, skipna: bool = ...) -> str: ...
def is_iterator(obj: object) -> bool: ...
diff --git a/pandas/_libs/lib.pyx b/pandas/_libs/lib.pyx
index a96152ccdf3cc..ba956ea6424fe 100644
--- a/pandas/_libs/lib.pyx
+++ b/pandas/_libs/lib.pyx
@@ -598,6 +598,43 @@ def maybe_booleans_to_slice(ndarray[uint8_t, ndim=1] mask):
return slice(start, end)
+def invert_object_array(ndarray values):
+ # GH#54569
+ cdef:
+ Py_ssize_t i, N = values.size
+ object val, res_val
+ ndarray result = cnp.PyArray_EMPTY(values.ndim, values.shape, cnp.NPY_OBJECT, 0)
+ object[::1] res_flat = result.ravel() # should NOT be a copy
+ cnp.flatiter it = cnp.PyArray_IterNew(values)
+
+ for i in range(N):
+
+ # Analogous to: val = values[i]
+ val = PyArray_GETITEM(values, PyArray_ITER_DATA(it))
+
+ if util.is_bool_object(val):
+ # https://github.com/numpy/numpy/issues/23926
+ # numpy will invert bools to -1 and -2 instead of False and True
+ if val:
+ res_val = False
+ else:
+ res_val = True
+ else:
+ res_val = ~val
+
+ # Note: we can index result directly instead of using PyArray_MultiIter_DATA
+ # like we do for the other functions because result is known C-contiguous
+ # and is the first argument to PyArray_MultiIterNew2. The usual pattern
+ # does not seem to work with object dtype.
+ # See discussion at
+ # github.com/pandas-dev/pandas/pull/46886#discussion_r860261305
+ res_flat[i] = res_val
+
+ cnp.PyArray_ITER_NEXT(it)
+
+ return result
+
+
@cython.wraparound(False)
@cython.boundscheck(False)
def array_equivalent_object(ndarray left, ndarray right) -> bool:
diff --git a/pandas/core/arrays/numpy_.py b/pandas/core/arrays/numpy_.py
index 99a3586871d10..2e22b7f6a7e1e 100644
--- a/pandas/core/arrays/numpy_.py
+++ b/pandas/core/arrays/numpy_.py
@@ -511,7 +511,11 @@ def to_numpy(
# Ops
def __invert__(self) -> NumpyExtensionArray:
- return type(self)(~self._ndarray)
+ if self.dtype.numpy_dtype == object:
+ res_values = lib.invert_object_array(self._ndarray)
+ else:
+ res_values = ~self._ndarray
+ return type(self)(res_values)
def __neg__(self) -> NumpyExtensionArray:
return type(self)(-self._ndarray)
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index be0d046697ba9..366efd36519b3 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -1510,7 +1510,13 @@ def __invert__(self) -> Self:
# inv fails with 0 len
return self.copy(deep=False)
- new_data = self._mgr.apply(operator.invert)
+ def blk_func(values: ArrayLike):
+ if values.dtype == object:
+ return lib.invert_object_array(values)
+ else:
+ return operator.invert(values)
+
+ new_data = self._mgr.apply(blk_func)
res = self._constructor_from_mgr(new_data, axes=new_data.axes)
return res.__finalize__(self, method="__invert__")
@@ -10241,7 +10247,11 @@ def _where(
# make sure we are boolean
fill_value = bool(inplace)
- cond = cond.fillna(fill_value)
+ with warnings.catch_warnings():
+ warnings.filterwarnings(
+ "ignore", "The 'downcast' keyword in fillna", category=FutureWarning
+ )
+ cond = cond.fillna(fill_value, downcast=False).infer_objects(copy=False)
msg = "Boolean array expected for the condition, not {dtype}"
diff --git a/pandas/tests/frame/test_unary.py b/pandas/tests/frame/test_unary.py
index 5e29d3c868983..60828e0c5070d 100644
--- a/pandas/tests/frame/test_unary.py
+++ b/pandas/tests/frame/test_unary.py
@@ -12,6 +12,31 @@
class TestDataFrameUnaryOperators:
# __pos__, __neg__, __invert__
+ def test_invert_object_bool(self):
+ # GH#54569 numpy's ~ on object dtype mangles bools
+ df = pd.DataFrame(
+ {"A": [True, False, np.bool_(True), np.bool_(False)]}, dtype=object
+ )
+ res = ~df
+ expected = pd.DataFrame(
+ {"A": [False, True, np.bool_(False), np.bool_(True)]}, dtype=object
+ )
+ tm.assert_frame_equal(res, expected)
+
+ res2 = ~df["A"]
+ tm.assert_series_equal(res2, expected["A"])
+
+ # while we're here, check that the numpy behavior is still weird.
+ # If this ever changes, we may be able to remove our overrides
+ res_np = pd.DataFrame(~df.values)
+ assert not res_np.equals(res)
+
+ # while we're here check NumpyExtensionArray
+ arr = df["A"].array
+ res3 = ~arr
+ exp3 = expected["A"].array
+ tm.assert_equal(res3, exp3)
+
@pytest.mark.parametrize(
"df,expected",
[
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Motivation: when trying to deprecate the inplace keyword in `where` in favor of using `mask` with a flipped condition, I found some tests failed because we pass an object-dtype frame of bools and inverting that frame gives weird results. This is because the Python behavior is weird:
```
>>> ~True
-2
>>> ~False
-1
```
This has the impact that if we have a bool-dtype Series and invert it, the behavior doesn't pointwise-match the object-dtype behavior. This PR changes the object-dtype behavior to what I think is more reasonable.
If someone were to complain about this diverging from the python/numpy behavior, I'd understand.
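As a hedged pure-Python sketch of the fix (mirroring the Cython `invert_object_array` added in this diff; the plain loop and array handling here are illustrative, not the actual pandas implementation):

```python
import numpy as np

def invert_object_array(values):
    # Elementwise invert for an object-dtype array: bools get logical
    # negation instead of Python's integer bitwise-not (~True == -2),
    # everything else falls through to the object's own ~.
    result = np.empty(values.shape, dtype=object)
    flat = result.ravel()  # a view, since result is C-contiguous
    for i, val in enumerate(values.ravel()):
        if isinstance(val, (bool, np.bool_)):
            flat[i] = not val
        else:
            flat[i] = ~val
    return result

out = invert_object_array(np.array([True, False, 3], dtype=object))
assert list(out) == [False, True, -4]
```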
| https://api.github.com/repos/pandas-dev/pandas/pulls/54569 | 2023-08-15T22:39:58Z | 2024-01-10T16:21:30Z | null | 2024-01-10T16:21:36Z |
REF: remove unnecessary Block specialization | diff --git a/pandas/_libs/internals.pyi b/pandas/_libs/internals.pyi
index ce112413f8a64..ffe6c7730bcdc 100644
--- a/pandas/_libs/internals.pyi
+++ b/pandas/_libs/internals.pyi
@@ -15,7 +15,6 @@ from pandas._typing import (
)
from pandas import Index
-from pandas.core.arrays._mixins import NDArrayBackedExtensionArray
from pandas.core.internals.blocks import Block as B
def slice_len(slc: slice, objlen: int = ...) -> int: ...
@@ -60,7 +59,7 @@ class BlockPlacement:
def append(self, others: list[BlockPlacement]) -> BlockPlacement: ...
def tile_for_unstack(self, factor: int) -> npt.NDArray[np.intp]: ...
-class SharedBlock:
+class Block:
_mgr_locs: BlockPlacement
ndim: int
values: ArrayLike
@@ -72,19 +71,8 @@ class SharedBlock:
ndim: int,
refs: BlockValuesRefs | None = ...,
) -> None: ...
-
-class NumpyBlock(SharedBlock):
- values: np.ndarray
- @final
- def slice_block_rows(self, slicer: slice) -> Self: ...
-
-class NDArrayBackedBlock(SharedBlock):
- values: NDArrayBackedExtensionArray
- @final
def slice_block_rows(self, slicer: slice) -> Self: ...
-class Block(SharedBlock): ...
-
class BlockManager:
blocks: tuple[B, ...]
axes: list[Index]
@@ -100,7 +88,7 @@ class BlockManager:
class BlockValuesRefs:
referenced_blocks: list[weakref.ref]
- def __init__(self, blk: SharedBlock | None = ...) -> None: ...
- def add_reference(self, blk: SharedBlock) -> None: ...
+ def __init__(self, blk: Block | None = ...) -> None: ...
+ def add_reference(self, blk: Block) -> None: ...
def add_index_reference(self, index: Index) -> None: ...
def has_reference(self) -> bool: ...
diff --git a/pandas/_libs/internals.pyx b/pandas/_libs/internals.pyx
index adf4e8c926fa3..7a9a3b84fd69f 100644
--- a/pandas/_libs/internals.pyx
+++ b/pandas/_libs/internals.pyx
@@ -24,7 +24,6 @@ cnp.import_array()
from pandas._libs.algos import ensure_int64
-from pandas._libs.arrays cimport NDArrayBacked
from pandas._libs.util cimport (
is_array,
is_integer_object,
@@ -639,7 +638,7 @@ def _unpickle_block(values, placement, ndim):
@cython.freelist(64)
-cdef class SharedBlock:
+cdef class Block:
"""
Defining __init__ in a cython class significantly improves performance.
"""
@@ -647,6 +646,11 @@ cdef class SharedBlock:
public BlockPlacement _mgr_locs
public BlockValuesRefs refs
readonly int ndim
+ # 2023-08-15 no apparent performance improvement from declaring values
+ # as ndarray in a type-special subclass (similar for NDArrayBacked).
+ # This might change if slice_block_rows can be optimized with something
+ # like https://github.com/numpy/numpy/issues/23934
+ public object values
def __cinit__(
self,
@@ -666,6 +670,8 @@ cdef class SharedBlock:
refs: BlockValuesRefs, optional
Ref tracking object or None if block does not have any refs.
"""
+ self.values = values
+
self._mgr_locs = placement
self.ndim = ndim
if refs is None:
@@ -699,51 +705,7 @@ cdef class SharedBlock:
ndim = maybe_infer_ndim(self.values, self.mgr_locs)
self.ndim = ndim
-
-cdef class NumpyBlock(SharedBlock):
- cdef:
- public ndarray values
-
- def __cinit__(
- self,
- ndarray values,
- BlockPlacement placement,
- int ndim,
- refs: BlockValuesRefs | None = None,
- ):
- # set values here; the (implicit) call to SharedBlock.__cinit__ will
- # set placement, ndim and refs
- self.values = values
-
- cpdef NumpyBlock slice_block_rows(self, slice slicer):
- """
- Perform __getitem__-like specialized to slicing along index.
-
- Assumes self.ndim == 2
- """
- new_values = self.values[..., slicer]
- return type(self)(new_values, self._mgr_locs, ndim=self.ndim, refs=self.refs)
-
-
-cdef class NDArrayBackedBlock(SharedBlock):
- """
- Block backed by NDArrayBackedExtensionArray
- """
- cdef public:
- NDArrayBacked values
-
- def __cinit__(
- self,
- NDArrayBacked values,
- BlockPlacement placement,
- int ndim,
- refs: BlockValuesRefs | None = None,
- ):
- # set values here; the (implicit) call to SharedBlock.__cinit__ will
- # set placement, ndim and refs
- self.values = values
-
- cpdef NDArrayBackedBlock slice_block_rows(self, slice slicer):
+ cpdef Block slice_block_rows(self, slice slicer):
"""
Perform __getitem__-like specialized to slicing along index.
@@ -753,22 +715,6 @@ cdef class NDArrayBackedBlock(SharedBlock):
return type(self)(new_values, self._mgr_locs, ndim=self.ndim, refs=self.refs)
-cdef class Block(SharedBlock):
- cdef:
- public object values
-
- def __cinit__(
- self,
- object values,
- BlockPlacement placement,
- int ndim,
- refs: BlockValuesRefs | None = None,
- ):
- # set values here; the (implicit) call to SharedBlock.__cinit__ will
- # set placement, ndim and refs
- self.values = values
-
-
@cython.freelist(64)
cdef class BlockManager:
cdef:
@@ -811,7 +757,7 @@ cdef class BlockManager:
cdef:
intp_t blkno, i, j
cnp.npy_intp length = self.shape[0]
- SharedBlock blk
+ Block blk
BlockPlacement bp
ndarray[intp_t, ndim=1] new_blknos, new_blklocs
@@ -901,7 +847,7 @@ cdef class BlockManager:
cdef BlockManager _slice_mgr_rows(self, slice slobj):
cdef:
- SharedBlock blk, nb
+ Block blk, nb
BlockManager mgr
ndarray blknos, blklocs
@@ -945,18 +891,18 @@ cdef class BlockValuesRefs:
cdef:
public list referenced_blocks
- def __cinit__(self, blk: SharedBlock | None = None) -> None:
+ def __cinit__(self, blk: Block | None = None) -> None:
if blk is not None:
self.referenced_blocks = [weakref.ref(blk)]
else:
self.referenced_blocks = []
- def add_reference(self, blk: SharedBlock) -> None:
+ def add_reference(self, blk: Block) -> None:
"""Adds a new reference to our reference collection.
Parameters
----------
- blk: SharedBlock
+ blk : Block
The block that the new references should point to.
"""
self.referenced_blocks.append(weakref.ref(blk))
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index be0d046697ba9..1d01b66a49edc 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -817,9 +817,7 @@ def swapaxes(self, axis1: Axis, axis2: Axis, copy: bool_t | None = None) -> Self
assert isinstance(new_mgr, BlockManager)
assert isinstance(self._mgr, BlockManager)
new_mgr.blocks[0].refs = self._mgr.blocks[0].refs
- new_mgr.blocks[0].refs.add_reference(
- new_mgr.blocks[0] # type: ignore[arg-type]
- )
+ new_mgr.blocks[0].refs.add_reference(new_mgr.blocks[0])
if not using_copy_on_write() and copy is not False:
new_mgr = new_mgr.copy(deep=True)
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index ecb9cd47d7995..eca0df67ff054 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -147,7 +147,7 @@ def newfunc(self, *args, **kwargs) -> list[Block]:
return cast(F, newfunc)
-class Block(PandasObject):
+class Block(PandasObject, libinternals.Block):
"""
Canonical n-dimensional unit of homogeneous dtype contained in a pandas
data structure
@@ -1936,7 +1936,7 @@ def pad_or_backfill(
return [self.make_block_same_class(new_values)]
-class ExtensionBlock(libinternals.Block, EABackedBlock):
+class ExtensionBlock(EABackedBlock):
"""
Block for holding extension types.
@@ -2204,7 +2204,7 @@ def _unstack(
return blocks, mask
-class NumpyBlock(libinternals.NumpyBlock, Block):
+class NumpyBlock(Block):
values: np.ndarray
__slots__ = ()
@@ -2242,7 +2242,7 @@ class ObjectBlock(NumpyBlock):
__slots__ = ()
-class NDArrayBackedExtensionBlock(libinternals.NDArrayBackedBlock, EABackedBlock):
+class NDArrayBackedExtensionBlock(EABackedBlock):
"""
Block backed by an NDArrayBackedExtensionArray
"""
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index 7bb579b22aeed..98d2934678546 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -263,9 +263,7 @@ def add_references(self, mgr: BaseBlockManager) -> None:
return
for i, blk in enumerate(self.blocks):
blk.refs = mgr.blocks[i].refs
- # Argument 1 to "add_reference" of "BlockValuesRefs" has incompatible type
- # "Block"; expected "SharedBlock"
- blk.refs.add_reference(blk) # type: ignore[arg-type]
+ blk.refs.add_reference(blk)
def references_same_values(self, mgr: BaseBlockManager, blkno: int) -> bool:
"""
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 020fadc359ebd..28c877798125d 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -897,7 +897,7 @@ def view(self, dtype: Dtype | None = None) -> Series:
if isinstance(res_ser._mgr, SingleBlockManager):
blk = res_ser._mgr._block
blk.refs = cast("BlockValuesRefs", self._references)
- blk.refs.add_reference(blk) # type: ignore[arg-type]
+ blk.refs.add_reference(blk)
return res_ser.__finalize__(self, method="view")
# ----------------------------------------------------------------------
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Broken off from a PR implementing optional 2D support for EAs. I can't detect any performance difference locally, want to get this in and check the asvs in a few days. | https://api.github.com/repos/pandas-dev/pandas/pulls/54568 | 2023-08-15T22:01:33Z | 2023-08-16T21:54:08Z | 2023-08-16T21:54:08Z | 2023-08-16T21:55:23Z |
fix #54564 | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index c35473b852eb9..d8f94608b0b7b 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -166,8 +166,7 @@ MultiIndex
I/O
^^^
--
--
+- Bug in :func:`read_excel`, with ``engine="xlrd"`` (``xls`` files) erroring when file contains NaNs/Infs (:issue:`54564`)
Period
^^^^^^
diff --git a/pandas/io/excel/_xlrd.py b/pandas/io/excel/_xlrd.py
index c68a0ab516e05..a444970792e6e 100644
--- a/pandas/io/excel/_xlrd.py
+++ b/pandas/io/excel/_xlrd.py
@@ -1,6 +1,7 @@
from __future__ import annotations
from datetime import time
+import math
from typing import TYPE_CHECKING
import numpy as np
@@ -120,9 +121,11 @@ def _parse_cell(cell_contents, cell_typ):
elif cell_typ == XL_CELL_NUMBER:
# GH5394 - Excel 'numbers' are always floats
# it's a minimal perf hit and less surprising
- val = int(cell_contents)
- if val == cell_contents:
- cell_contents = val
+ if math.isfinite(cell_contents):
+ # GH54564 - don't attempt to convert NaN/Inf
+ val = int(cell_contents)
+ if val == cell_contents:
+ cell_contents = val
return cell_contents
data = []
diff --git a/pandas/tests/io/data/excel/test6.xls b/pandas/tests/io/data/excel/test6.xls
new file mode 100644
index 0000000000000..e43a1a67510d8
Binary files /dev/null and b/pandas/tests/io/data/excel/test6.xls differ
diff --git a/pandas/tests/io/excel/test_xlrd.py b/pandas/tests/io/excel/test_xlrd.py
index 509029861715e..6d5008ca9ee68 100644
--- a/pandas/tests/io/excel/test_xlrd.py
+++ b/pandas/tests/io/excel/test_xlrd.py
@@ -1,5 +1,6 @@
import io
+import numpy as np
import pytest
import pandas as pd
@@ -44,6 +45,17 @@ def test_read_xlsx_fails(datapath):
pd.read_excel(path, engine="xlrd")
+def test_nan_in_xls(datapath):
+ # GH 54564
+ path = datapath("io", "data", "excel", "test6.xls")
+
+ expected = pd.DataFrame({0: np.r_[0, 2].astype("int64"), 1: np.r_[1, np.nan]})
+
+ result = pd.read_excel(path, header=None)
+
+ tm.assert_frame_equal(result, expected)
+
+
@pytest.mark.parametrize(
"file_header",
[
| - [x] closes #54564
- [x] Tests added and passed
- [x] All code checks passed
- [x] Added type annotations to new arguments/methods/functions: *No new functions.*
- [x] Added an entry in the latest `doc/source/whatsnew/v2.1.0.rst` file: *-> may this fix make it into a potential v2.0.4 instead?*
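For illustration, the guarded conversion in the diff boils down to this sketch (the function name is made up; only the `math.isfinite` guard is the actual change):

```python
import math

def parse_excel_number(cell_contents):
    # Excel 'numbers' are always floats; downcast to int when lossless.
    # int() raises on NaN (ValueError) and Inf (OverflowError), so only
    # attempt the downcast for finite values -- the guard this PR adds.
    if math.isfinite(cell_contents):
        val = int(cell_contents)
        if val == cell_contents:
            return val
    return cell_contents

assert parse_excel_number(2.0) == 2      # lossless -> int
assert parse_excel_number(2.5) == 2.5    # not lossless -> stays float
assert math.isnan(parse_excel_number(float("nan")))  # previously raised
```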
| https://api.github.com/repos/pandas-dev/pandas/pulls/54567 | 2023-08-15T21:58:33Z | 2023-08-21T18:40:15Z | 2023-08-21T18:40:15Z | 2023-08-21T18:40:22Z |
ENH: support Index.any/all with float, timedelta64 dtypes | diff --git a/doc/source/whatsnew/v2.1.0.rst b/doc/source/whatsnew/v2.1.0.rst
index 43a64a79e691b..d5c8a4974345c 100644
--- a/doc/source/whatsnew/v2.1.0.rst
+++ b/doc/source/whatsnew/v2.1.0.rst
@@ -265,6 +265,7 @@ Other enhancements
- Many read/to_* functions, such as :meth:`DataFrame.to_pickle` and :func:`read_csv`, support forwarding compression arguments to ``lzma.LZMAFile`` (:issue:`52979`)
- Reductions :meth:`Series.argmax`, :meth:`Series.argmin`, :meth:`Series.idxmax`, :meth:`Series.idxmin`, :meth:`Index.argmax`, :meth:`Index.argmin`, :meth:`DataFrame.idxmax`, :meth:`DataFrame.idxmin` are now supported for object-dtype (:issue:`4279`, :issue:`18021`, :issue:`40685`, :issue:`43697`)
- :meth:`DataFrame.to_parquet` and :func:`read_parquet` will now write and read ``attrs`` respectively (:issue:`54346`)
+- :meth:`Index.all` and :meth:`Index.any` with floating dtypes and timedelta64 dtypes no longer raise ``TypeError``, matching the :meth:`Series.all` and :meth:`Series.any` behavior (:issue:`54566`)
- :meth:`Series.cummax`, :meth:`Series.cummin` and :meth:`Series.cumprod` are now supported for pyarrow dtypes with pyarrow version 13.0 and above (:issue:`52085`)
- Added support for the DataFrame Consortium Standard (:issue:`54383`)
- Performance improvement in :meth:`.DataFrameGroupBy.quantile` and :meth:`.SeriesGroupBy.quantile` (:issue:`51722`)
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 796aadf9e4061..270d11f7de369 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -7204,11 +7204,12 @@ def any(self, *args, **kwargs):
"""
nv.validate_any(args, kwargs)
self._maybe_disable_logical_methods("any")
- # error: Argument 1 to "any" has incompatible type "ArrayLike"; expected
- # "Union[Union[int, float, complex, str, bytes, generic], Sequence[Union[int,
- # float, complex, str, bytes, generic]], Sequence[Sequence[Any]],
- # _SupportsArray]"
- return np.any(self.values) # type: ignore[arg-type]
+ vals = self._values
+ if not isinstance(vals, np.ndarray):
+ # i.e. EA, call _reduce instead of "any" to get TypeError instead
+ # of AttributeError
+ return vals._reduce("any")
+ return np.any(vals)
def all(self, *args, **kwargs):
"""
@@ -7251,11 +7252,12 @@ def all(self, *args, **kwargs):
"""
nv.validate_all(args, kwargs)
self._maybe_disable_logical_methods("all")
- # error: Argument 1 to "all" has incompatible type "ArrayLike"; expected
- # "Union[Union[int, float, complex, str, bytes, generic], Sequence[Union[int,
- # float, complex, str, bytes, generic]], Sequence[Sequence[Any]],
- # _SupportsArray]"
- return np.all(self.values) # type: ignore[arg-type]
+ vals = self._values
+ if not isinstance(vals, np.ndarray):
+ # i.e. EA, call _reduce instead of "all" to get TypeError instead
+ # of AttributeError
+ return vals._reduce("all")
+ return np.all(vals)
@final
def _maybe_disable_logical_methods(self, opname: str_t) -> None:
@@ -7264,9 +7266,9 @@ def _maybe_disable_logical_methods(self, opname: str_t) -> None:
"""
if (
isinstance(self, ABCMultiIndex)
- or needs_i8_conversion(self.dtype)
- or isinstance(self.dtype, (IntervalDtype, CategoricalDtype))
- or is_float_dtype(self.dtype)
+ # TODO(3.0): PeriodArray and DatetimeArray any/all will raise,
+ # so checking needs_i8_conversion will be unnecessary
+ or (needs_i8_conversion(self.dtype) and self.dtype.kind != "m")
):
# This call will raise
make_invalid_op(opname)(self)
diff --git a/pandas/tests/indexes/numeric/test_numeric.py b/pandas/tests/indexes/numeric/test_numeric.py
index 977c7da7d866f..8cd295802a5d1 100644
--- a/pandas/tests/indexes/numeric/test_numeric.py
+++ b/pandas/tests/indexes/numeric/test_numeric.py
@@ -227,6 +227,14 @@ def test_fillna_float64(self):
exp = Index([1.0, "obj", 3.0], name="x")
tm.assert_index_equal(idx.fillna("obj"), exp, exact=True)
+ def test_logical_compat(self, simple_index):
+ idx = simple_index
+ assert idx.all() == idx.values.all()
+ assert idx.any() == idx.values.any()
+
+ assert idx.all() == idx.to_series().all()
+ assert idx.any() == idx.to_series().any()
+
class TestNumericInt:
@pytest.fixture(params=[np.int64, np.int32, np.int16, np.int8, np.uint64])
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index ffa0b115e34fb..bc04c1c6612f4 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -692,7 +692,12 @@ def test_format_missing(self, vals, nulls_fixture):
@pytest.mark.parametrize("op", ["any", "all"])
def test_logical_compat(self, op, simple_index):
index = simple_index
- assert getattr(index, op)() == getattr(index.values, op)()
+ left = getattr(index, op)()
+ assert left == getattr(index.values, op)()
+ right = getattr(index.to_series(), op)()
+        # left might not match right exactly in e.g. string cases
+        # because we use np.any/all instead of .any/all
+ assert bool(left) == bool(right)
@pytest.mark.parametrize(
"index", ["string", "int64", "int32", "float64", "float32"], indirect=True
diff --git a/pandas/tests/indexes/test_old_base.py b/pandas/tests/indexes/test_old_base.py
index f8f5a543a9c19..79dc423f12a85 100644
--- a/pandas/tests/indexes/test_old_base.py
+++ b/pandas/tests/indexes/test_old_base.py
@@ -209,17 +209,25 @@ def test_numeric_compat(self, simple_index):
1 // idx
def test_logical_compat(self, simple_index):
- if (
- isinstance(simple_index, RangeIndex)
- or is_numeric_dtype(simple_index.dtype)
- or simple_index.dtype == object
- ):
+ if simple_index.dtype == object:
pytest.skip("Tested elsewhere.")
idx = simple_index
- with pytest.raises(TypeError, match="cannot perform all"):
- idx.all()
- with pytest.raises(TypeError, match="cannot perform any"):
- idx.any()
+ if idx.dtype.kind in "iufcbm":
+ assert idx.all() == idx._values.all()
+ assert idx.all() == idx.to_series().all()
+ assert idx.any() == idx._values.any()
+ assert idx.any() == idx.to_series().any()
+ else:
+ msg = "cannot perform (any|all)"
+ if isinstance(idx, IntervalIndex):
+ msg = (
+ r"'IntervalArray' with dtype interval\[.*\] does "
+ "not support reduction '(any|all)'"
+ )
+ with pytest.raises(TypeError, match=msg):
+ idx.all()
+ with pytest.raises(TypeError, match=msg):
+ idx.any()
def test_repr_roundtrip(self, simple_index):
if isinstance(simple_index, IntervalIndex):
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
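A minimal sketch of the dispatch this PR adds to `Index.any` (helper name is illustrative; the real code is in `pandas/core/indexes/base.py` as shown in the diff):

```python
import numpy as np

def index_any(vals):
    # Extension arrays go through _reduce, so unsupported dtypes raise
    # TypeError rather than AttributeError; plain ndarrays -- including
    # float and timedelta64, which previously raised -- use np.any.
    # bool() wrapper here is only for the illustrative asserts below.
    if not isinstance(vals, np.ndarray):
        return vals._reduce("any")
    return bool(np.any(vals))

assert index_any(np.array([0.0, 1.5])) is True
assert index_any(np.array([0.0, 0.0])) is False
```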
Match the Series behavior. | https://api.github.com/repos/pandas-dev/pandas/pulls/54566 | 2023-08-15T21:17:24Z | 2023-08-22T19:08:41Z | 2023-08-22T19:08:41Z | 2023-08-22T19:18:04Z |
Named agg tuples Issue #18220 | diff --git a/doc/source/user_guide/groupby.rst b/doc/source/user_guide/groupby.rst
index 75c816f66d5e4..375f1efa75ef2 100644
--- a/doc/source/user_guide/groupby.rst
+++ b/doc/source/user_guide/groupby.rst
@@ -770,6 +770,7 @@ no column selection, so the values are just the functions.
max_height="max",
)
+
Applying different functions to DataFrame columns
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index 8bef167b747e2..7264e5f0eb77a 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -1424,6 +1424,28 @@ class DataFrameGroupBy(GroupBy[DataFrame]):
A
1 1.0
2 3.0
+
+ Passing a List of Tuples
+
+ Demonstrates using the `agg` method with a list of tuples for grouping and aggregation.
+ Consider a DataFrame `df`:
+ key | data
+ ----|------
+ 'a' | 0.5
+ 'a' | -1.0
+ 'b' | 2.0
+ 'b' | -0.5
+ 'a' | 1.2
+
+ Example: Applying multiple aggregations - mean and standard deviation
+ df_result = df.groupby('key')['data'].agg([('mean_value', 'mean'), ('std_deviation', 'std')])
+
+ Using a list of tuples provides a concise way to apply multiple aggregations to the same column while controlling
+ the output column names. This approach is especially handy when you need to calculate various statistics on
+ the same data within each group.
+
+ :return: Example of using the `agg` method with a list of tuples for grouping and aggregation.
+ :rtype: None
"""
)
| - [ ] closes #18220
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54563 | 2023-08-15T20:30:27Z | 2023-09-11T16:41:28Z | null | 2023-09-11T17:08:46Z |
Backport PR #54493 on branch 2.1.x (DOC: updated required dependencies list) | diff --git a/doc/source/getting_started/install.rst b/doc/source/getting_started/install.rst
index 1896dffa9a105..0ab0391ac78a9 100644
--- a/doc/source/getting_started/install.rst
+++ b/doc/source/getting_started/install.rst
@@ -206,6 +206,7 @@ Package Minimum support
`NumPy <https://numpy.org>`__ 1.22.4
`python-dateutil <https://dateutil.readthedocs.io/en/stable/>`__ 2.8.2
`pytz <https://pypi.org/project/pytz/>`__ 2020.1
+`tzdata <https://pypi.org/project/tzdata/>`__ 2022.1
================================================================ ==========================
.. _install.optional_dependencies:
| Backport PR #54493: DOC: updated required dependencies list | https://api.github.com/repos/pandas-dev/pandas/pulls/54562 | 2023-08-15T19:46:26Z | 2023-08-15T21:32:51Z | 2023-08-15T21:32:51Z | 2023-08-15T21:32:51Z |